LING 575 — Ethical Considerations in NLP
A quick pitch for my upcoming Winter 2019 course, which will cover some of the topics from this course.
N.B.: If you are a student enrolled in this course, please refer to the course page on Canvas. This page is for public-facing and archival purposes and may not be up-to-date for coursework.
Course Info
- Lecture: Wednesdays, 3:30-5:50 in MGH 288 and online
Instructor Info
- Ryan Georgi
- Office Hours: Wednesdays 12:30-2:30.
- Office: GUG 418-D
Description
The goal of this course is to better understand the ethical considerations that arise in the deployment of NLP technology, including (but not limited to) considerations of demographic misrepresentation, bias confirmation, and privacy. We will start with foundations in ethics, then move to the current and growing research literature on ethics in NLP and allied fields, before considering specific NLP tasks, data sets, and training methodologies through the lens of the ethical considerations identified. Course projects are expected to take the form of a term paper that analyzes a particular NLP task or data set in terms of the concepts developed through the quarter and looks forward to how ethical best practices could be developed for that task or data set.
In particular, I hope to find answers to the following guiding questions over the course of the term:
- What ethical considerations arise in the design and deployment of NLP technologies?
- Which of these are specific to NLP (as opposed to AI or technology more generally)?
- What best practices can/should NLP developers deploy in light of the ethical concerns identified?
Note: To request academic accommodations due to a disability, please contact Disability Resources for Students, 448 Schmitz, 206-543-8924 (V/TTY). If you have a letter from DRS indicating that you have a disability that requires academic accommodations, please present the letter to the instructor so we can discuss the accommodations you might need in this class.
Requirements
- KWLA paper (approx. 7 pages): 15%
- Proposed NLP/ML ethics code critique: 20%
- Participation in discussions (incl. Canvas): 15%
- Term project: 50%
Schedule of Topics and Assignments (subject to change)
Date | Topic | Reading | Due |
---|---|---|---|
3/28 | Introduction, organization. Why are we here? What do we hope to accomplish? | Hovy and Spruit 2016, plus at least 2 other papers/articles listed under Overviews/Calls to Action (or just one, if you pick something particularly long) | |
4/2 | | | KWLA papers: K & W due 11pm |
4/4 | Philosophical foundations | 2 items from Philosophical Underpinnings, at least one of which comes from an author whose perspective differs greatly from your own life experience; be prepared to discuss the assigned reading questions | |
4/11 | Philosophical foundations (cont.) | | |
4/18 | Exclusion/Discrimination/Bias | 3–4 items from Exclusion/Representation/Discrimination/Bias, considering the assigned reading questions (not all of which are necessarily appropriate for all readings) | |
4/25 | Word Embeddings and Language Behavior as Ground Truth; Chat Bots | 2 items from each of Word Embeddings and Language Behavior as Ground Truth and Chat Bots, considering the assigned reading questions (not all of which are necessarily appropriate for all readings) | |
5/2 | Proposed code of ethics for ACL; Term project brainstorm | Details | |
5/7 | | | Term paper proposals due |
5/9 | Value Sensitive Design | Any two other papers from Value Sensitive Design, considering the assigned reading questions; in addition, be prepared to apply them to an NLP project you are interested in | |
5/14 | | | Proposed NLP/ML ethics code critique due |
5/16 | Other Best Practices | At least three papers from Other Best Practices, considering the assigned reading/discussion questions | Term paper outline due |
5/23 | Privacy | At least three papers from Privacy, at least one from a CS-type perspective and at least one from a non-CS scholarly perspective (social sciences or law), considering the assigned reading/discussion questions | |
5/28 | | | Term paper draft due |
5/30 | NLP Applications Addressing Ethical Issues | Three of the items under NLP Applications Addressing Ethical Issues below; be prepared to discuss the assigned reading questions | |
6/1 | | | KWLA papers due; comments on partner's paper draft due |
6/6 | | | Final papers due 11pm |
Bibliography
- Overviews/Calls to Action
- Philosophical Underpinnings
- Human Subjects & Social Media Research
- Exclusion/Representation/Discrimination/Bias
- Word Embeddings and Language Behavior as Ground Truth
- Chat Bots
- Abusive Language Online
- Privacy
- NLP Applications Addressing Ethical Issues
- Crowdsourcing
- Other
- Value Sensitive Design
- Proposals for Codes of Ethics
- Other Best Practices
- Workshops
- Other Resources
- Other Courses
Overviews/Calls to Action
- Amblard, M. (2016). Pour un TAL responsable. Traitement Automatique des Langues, 57 (2), 21-45.
- Ceglowski, M. (2016, June 26). The Moral Economy of Tech. SASE 2016.
- Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538 (7625), 311.
- Escartín, C. P., Reijers, W., Lynn, T., Moorkens, J., Way, A., & Liu, C.-H. (2017). Ethical Considerations in NLP Shared Tasks. In Proceedings of the First Workshop on Ethics in Natural Language Processing.
- Executive Office of the President National Science and Technology Council Committee on Technology. (2016). Preparing for the future of artificial intelligence.
- Fort, K., Adda, G., & Cohen, K. B. (2016). Ethique et traitement automatique des langues et de la parole : entre truismes et tabous. Traitement Automatique des Langues, 57 (2), 7-19.
- Hovy, D., & Spruit, S. L. (2016). The social impact of natural language processing. In Proceedings of the 54th annual meeting of the association for computational linguistics (volume 2: Short papers) (pp. 591-598). Berlin, Germany: Association for Computational Linguistics.
- Lefeuvre-Halftermeyer, A., Govaere, V., Antoine, J.-Y., Allegre, W., Pouplin, S., Departe, J.-P., et al. (2016). Typologie des risques pour une analyse éthique de l’impact des technologies du TAL. Traitement Automatique des Langues, 57 (2), 47-71.
- Leidner, J. L., & Plachouras, V. (2017). Ethical by Design: Ethics Best Practices for Natural Language Processing. In Proceedings of the First Workshop on Ethics in Natural Language Processing (pp. 8–18). Valencia, Spain.
- Markham, A. (May 18, 2016). OKCupid data release fiasco: It’s time to rethink ethics education. Data & Society: Points.
- O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. NY: Crown Publishing Group.
- Rogaway, P. (2015). The moral character of cryptographic work.
- Shneiderman, B. (2016). Opinion: The dangers of faulty, biased, or malicious algorithms requires independent oversight. Proceedings of the National Academy of Sciences, 113 (48), 13538-13540.
- Sourour, B. (Nov 13, 2016). The code I’m still ashamed of. Medium.com.
Philosophical Underpinnings
- Bartky, S. L. (2002). “Sympathy and solidarity” and other essays (Vol. 32). Rowman & Littlefield.
- Bryson, J. J. (2015). Artificial intelligence and pro-social behaviour. In C. Misselhorn (Ed.), Collective agency and cooperation in natural and artificial systems: Explanation, implementation and simulation (pp. 281-306). Cham: Springer International Publishing.
- Butler, J. (2005). Giving an account of oneself. Oxford University Press. (Available online, through UW libraries)
- De La Torre, M. A. (2013). Ethics: A liberative approach. Fortress Press. (Available online through UW Libraries; read intro + chapter of choice)
- Edgar, S. L. (2003). Morality and machines: Perspectives on computer ethics. Jones & Bartlett Learning. (Available online through UW libraries)
- Fieser, J., & Dowden, B. (Eds.). (2016). Internet encyclopedia of philosophy: Entries on Ethics
- Liamputtong, P. (2006). Researching the vulnerable: A guide to sensitive research methods. Sage. (Available online, through UW libraries)
- Moor, J.H. (1985). What is computer ethics? Metaphilosophy, 16:266–275, October.
- Quinn, M. J. (2014). Ethics for the information age. Pearson.
- Zalta, E. N. (Ed.). (2016). The Stanford encyclopedia of philosophy (Winter 2016 Edition ed.): Entries on Ethics
Human Subjects & Social Media Research
- Perlman, D. (2004, May 24). Ethics in clinical research: A history of human subject protections and practical implementation of ethical standards. SoCRA SOURCE, 37–41.
- Townsend, L., & Wallace, C. (2015). Social Media Research: A Guide to Ethics.
- Williams, M. L., Burnap, P., & Sloan, L. (2017). Towards an Ethical Framework for Publishing Twitter Data in Social Research: Taking into Account Users’ Views, Online Context and Algorithmic Estimation. Sociology, 51(6), 1149–1168. http://doi.org/10.1177/0038038517708140
- Woodfield, K. (2018). The Ethics of Online Research. (K. Woodfield, Ed.) (1st ed., pp. 1–268). Emerald Publishing. [link to copy on canvas] [link to proquest page via UW library]
- Particularly, Chapters:
- 2: Users’ Views of Ethics in Social Media Research: Informed Consent, Anonymity, and Harm
- 5: Informed Consent in Qualitative Social Media Research
- 7: Ethical Challenges of Publishing and Sharing Social Media Research Data
- 8: The Ethics of Using Social Media Data in Research: A New Framework
Exclusion/Representation/Discrimination/Bias
- Angwin, J., & Larson, J. (Dec 30, 2016). Bias in criminal risk scores is mathematically inevitable, researchers say. ProPublica.
- boyd, d. (2015). What world are we building? (Everett C Parker Lecture. Washington, DC, October 20)
- Crawford, K. (2017). The Trouble with Bias. NIPS 2017 keynote. [youtube video]
- Brennan, M. (2015). Can computers be racist? big data, inequality, and discrimination. (online; Ford Foundation)
- Chouldechova, A., & G’Sell, M. (2017). Fairer and more accurate, but for whom? Presented at the Workshop on Fairness, Accountability, and Transparency in Machine Learning.
- Clark, J. (Jun 23, 2016). Artificial intelligence has a ‘sea of dudes’ problem. Bloomberg Technology.
- Crawford, K. (Apr 1, 2013). The hidden biases in big data. Harvard Business Review.
- Daumé III, H. (Nov 8, 2016). Bias in ML, and teaching AI. (Blog post, accessed 1/17/17)
- Emspak, J. (Dec 29, 2016). How a machine learns prejudice: Artificial intelligence picks up bias from human creators–not from hard, cold logic. Scientific American.
- Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14(3), 330-347.
- Guynn, J. (Jun 10, 2016). ‘Three black teenagers’ Google search sparks outrage. USA Today.
- Hardt, M. (Sep 26, 2014). How big data is unfair: Understanding sources of unfairness in data driven decision making. Medium.
- Koolen, C., & van Cranenburgh, A. (2017). These are not the Stereotypes You are Looking For: Bias and Fairness in Authorial Gender Attribution. In Proceedings of the First Workshop on Ethics in Natural Language Processing.
- Jacob. (May 8, 2016). Deep learning racial bias: The Avenue Q theory of ubiquitous racism. Medium.
- Larson, B. N. (2017). Gender as a variable in natural-language processing: Ethical considerations. In Proceedings of the First Workshop on Ethics in Natural Language Processing (pp. 30–40). Valencia, Spain.
- Larson, J., Angwin, J., & Parris Jr., T. (Oct 19, 2016). Breaking the black box: How machines learn to be racist. ProPublica.
- Morrison, L. (Jan 9, 2017). Speech analysis could now land you a promotion. BBC capital.
- Rao, D. (n.d.). Fairness in machine learning. (slides)
- Sweeney, L. (May 1, 2013). Discrimination in online ad delivery. Communications of the ACM, 56 (5), 44-54.
- Tatman, R. (2017). Gender and Dialect Bias in YouTube’s Automatic Captions. In Proceedings of the First Workshop on Ethics in Natural Language Processing.
- Wang, Y., & Kosinski, M. (2018). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology, 114(2), 246–257. http://doi.org/10.1037/pspa0000098
- Responses:
- Hirschman, D. (2017, September 10). artificial intelligence discovers gayface. sigh. Retrieved March 30, 2018.
- Cohen, P. N. (2017, September 11). On artificially intelligent gaydar. Retrieved March 30, 2018.
- Wijeratne, S., Balasuriya, L., Doran, D., & Sheth, A. (2016). Word Embeddings to Enhance Twitter Gang Member Profile Identification. Presented at the IJCAI Workshop on Semantic Machine Learning.
- Yao, S., & Huang, B. (2017). New Fairness Metrics for Recommendation that Embrace Differences. Presented at the Workshop on Fairness, Accountability, and Transparency in Machine Learning.
- Zliobaite, I. (2015). On the relation between accuracy and fairness in binary classification. CoRR, abs/1505.05723.
Word Embeddings and Language Behavior as Ground Truth
- Bolukbasi, T., Chang, K., Zou, J. Y., Saligrama, V., & Kalai, A. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. CoRR, abs/1607.06520.
- Caliskan-Islam, A., Bryson, J., & Narayanan, A. (2016). A story of discrimination and unfairness. (Talk presented at HotPETS 2016)
- Daumé III, H. (2016). Language bias and black sheep. (Blog post, accessed 12/29/16)
- Herbelot, A., Redecker, E. von, & Müller, J. (2012, April). Distributional techniques for philosophical enquiry. In Proceedings of the 6th workshop on language technology for cultural heritage, social sciences, and humanities (pp. 45-54). Avignon, France: Association for Computational Linguistics.
- Schmidt, B. (2015). Rejecting the gender binary: A vector-space operation. (Blog post, accessed 12/29/16)
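The readings above examine how pretrained word embeddings absorb social biases from their training text, using analogy and similarity probes over the vector space. For a concrete starting point for discussion, here is a minimal sketch of that kind of probe (an illustration only, not a replication of any paper's full method); the `glove-wiki-gigaword-100` model name is whatever pretrained vectors gensim's downloader provides, and any pretrained embedding would do:

```python
# Minimal embedding-bias probe in the spirit of the readings above.
# Illustrative sketch only; not the method of any single paper.
import gensim.downloader as api

# Small pretrained GloVe model distributed via gensim's downloader.
model = api.load("glove-wiki-gigaword-100")

# Analogy probe: "man is to programmer as woman is to ...?"
# (this model's vocabulary is lowercase, single tokens)
print(model.most_similar(positive=["woman", "programmer"],
                         negative=["man"], topn=5))

# Direct similarity probe: compare gendered-pronoun associations
# for a handful of occupation words.
for occupation in ["nurse", "engineer", "homemaker", "programmer"]:
    print(occupation,
          "she:", round(float(model.similarity("she", occupation)), 3),
          "he:", round(float(model.similarity("he", occupation)), 3))
```

Probes like these are suggestive rather than conclusive; several of the readings (e.g., Daumé and Schmidt) discuss what such vector-space operations can and cannot tell us about bias.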
Chat Bots
- Fessler, L. (Feb 22, 2017). Siri, define patriarchy: We tested bots like Siri and Alexa to see who would stand up to sexual harassment. Quartz.
- Fung, P. (Dec 3, 2015). Can robots slay sexism? World Economic Forum.
- Mott, N. (Jun 8, 2016). Why you should think twice before spilling your guts to a chatbot. Passcode.
- Paolino, J. (Jan 4, 2017). Google home vs Alexa: Two simple user experience design gestures that delighted a female user. Medium.
- Seaman Cook, J. (Apr 8, 2016). From Siri to sexbots: Female AI reinforces a toxic desire for passive, agreeable and easily dominated women. Salon.
- Twitter. (Apr 7, 2016). Automation rules and best practices. (Web page, accessed 12/29/16)
- Yao, M. (n.d.). Can bots manipulate public opinion? (Web page, accessed 12/29/16)
Abusive Language Online
- Clarke, I., & Grieve, J. (2017). Dimensions of Abusive Language on Twitter. Presented at the First Workshop on Abusive Language Online. Retrieved from https://drive.google.com/file/d/0B4xDAGbwZJjQSlJzQWZscjhsa0E/view?usp=embed_facebook
- Gambäck, B., & Sikdar, U. K. (2017). Using Convolutional Neural Networks to Classify Hate-Speech. In Proceedings of the First Workshop on Abusive Language Online.
- Kennedy, G., McCollough, A., Dixon, E., Bastidas, A., Ryan, J., Loo, C., & Sahay, S. (2017). Technology Solutions to Combat Online Harassment. Proceedings of the First Workshop on Abusive Language Online, 73–77. http://doi.org/10.18653/v1/W17-3011
- Napoles, C., Pappu, A., & Tetreault, J. (2017). Automatically Identifying Good Conversations Online (Yes, They Do Exist!). In Proceedings of ICWSM 2017.
- Ross, B., Rist, M., Carbonell, G., Cabrera, B., Kurowsky, N., & Wojatzki, M. (2017, January 27). Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis. http://doi.org/10.17185/duepublico/42132
- Samghabadi, N. S., Maharjan, S., Sprague, A., Diaz-Sprague, R., & Solorio, T. (2017). Detecting Nastiness in Social Media. Presented at the First Workshop on Abusive Language Online, Vancouver, Canada.
- Waseem, Z., & Hovy, D. (2016). Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter. In Proceedings of NAACL-HLT 2016 (pp. 88–93). San Diego, California.
Privacy
- Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., et al. (2016). Deep Learning with Differential Privacy. ArXiv e-prints.
- Amazon.com. (2017). Memorandum of Law in Support of Amazon’s Motion to Quash Search Warrant.
- Brant, T. (Dec 27, 2016). Amazon Alexa data wanted in murder investigation. PC Mag.
- Friedman, B., Kahn Jr, P. H., Hagman, J., Severson, R. L., & Gill, B. (2006). The watcher and the watched: Social judgments about privacy in a public place. Human-Computer Interaction, 21(2), 235-272.
- Golbeck, J., & Mauriello, M. L. (2016). User perception of facebook app data access: A comparison of methods and privacy concerns. Future Internet, 8(2), 9.
- Narayanan, A., & Shmatikov, V. (2010). Myths and fallacies of “personally identifiable information”. Communications of the ACM, 53 (6), 24-26.
- Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. Stanford: Stanford University Press.
- Solove, D. J. (2007). ‘I’ve got nothing to hide’ and other misunderstandings of privacy. San Diego Law Review, 44 (4), 745-772.
- Steel, E., & Angwin, J. (Aug 4, 2010). On the Web’s cutting edge, anonymity in name only. The Wall Street Journal.
- Tene, O., & Polonetsky, J. (2012). Big data for all: Privacy and user control in the age of analytics. Northwestern Journal of Technology and Intellectual Property, 11(45), 239-273.
- Vitak, J., Shilton, K., & Ashktorab, Z. (2016). Beyond the Belmont principles: Ethical challenges, practices, and beliefs in the online data research community. In Proceedings of the 19th ACM conference on computer-supported cooperative work & social computing (pp. 941-953).
NLP Applications Addressing Ethical Issues
- Fokkens, A. (2016). Reading between the lines. (Slides presented at Language Analysis Portal Launch event, University of Oslo, Sept 2016)
- Gershgorn, D. (Feb 27, 2017). Not there yet: Alphabet’s hate-fighting AI doesn’t understand hate yet. Quartz.
- Google.com. (2017). The women missing from the silver screen and the technology used to find them. Blog post, accessed March 1, 2017.
- Greenberg, A. (2016). Inside Google’s Internet Justice League and Its AI-Powered War on Trolls. Wired.
- Kelion, L. (Mar 1, 2017). Facebook artificial intelligence spots suicidal users. BBC News.
- Munger, K. (2016). Tweetment effects on the tweeted: Experimentally reducing racist harassment. Political Behavior, 1-21.
- Munger, K. (Nov 17, 2016). This researcher programmed bots to fight racism on twitter. It worked. Washington Post.
- Murgia, M. (Feb 23, 2017). Google launches robo-tool to flag hate speech online. Financial Times.
- The Times is partnering with Jigsaw to expand comment capabilities. (Sep 20, 2016). The New York Times.
- Fake News Challenge
- Jigsaw Challenges
- Perspective (from Jigsaw)
- But see: Hosseini, H., Kannan, S., Zhang, B., & Poovendran, R. (2017). Deceiving Google’s Perspective API Built for Detecting Toxic Comments. ArXiv.
- Textio. See also:
- CEO Kieran Snyder’s posts on medium.com
- Recording of Kieran Snyder’s NLP Meetup talk from Aug 15, 2016
Crowdsourcing
- Bederson, B. B., & Quinn, A. J. (2011). Web workers unite! Addressing challenges of online laborers. In CHI’11 extended abstracts on human factors in computing systems (pp. 97-106).
- Callison-Burch, C. (2016). Crowd workers. (Slides from Crowdsourcing and Human Computation, accessed online 12/30/16)
- Callison-Burch, C. (2016). Ethics of crowdsourcing. (Slides from Crowdsourcing and Human Computation, accessed online 12/30/16)
- Fort, K., Adda, G., & Cohen, K. B. (2011). Amazon mechanical turk: Gold mine or coal mine? Computational Linguistics, 37 (2), 413-420.
- Snyder, J. (2010). Exploitation and sweatshop labor: Perspectives and issues. Business Ethics Quarterly, 20 (2), 187-213.
Other
- Cohen, K. B., Pestian, J., & Fort, K. (2015). Annotateurs volontaires investis et éthique de l’annotation de lettres de suicidés. In ETeRNAL (ethique et traitement automatique des langues).
- Fort, K., & Couillault, A. (2016). Yes, we care! Results of the ethics and natural language processing surveys. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). Paris, France: European Language Resources Association (ELRA).
- Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167-194). MIT Press.
- Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. (Accessed online, 12/30/16)
- Kleinberg, J. M., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. CoRR, abs/1609.05807.
- Metcalf, J., & Crawford, K. (2016). Where are Human Subjects in Big Data Research? The Emerging Ethics Divide. Big Data and Society.
- Metcalf, J., Keller, E. F., & boyd, d. (2016). Perspectives on big data, ethics, and society. (Accessed 12/30/16)
- Meyer, M. N. (2015). Two cheers for corporate experimentation: The A/B illusion and the virtues of data-driven innovation. Colo. Tech. L.J., 13, 273.
- Wallach, H. (Dec 19, 2014). Big data, machine learning, and the social sciences: Fairness, accountability, and transparency. Medium.
- Wattenberg, M., Viégas, F., & Hardt, M. (Oct 7, 2016). Attacking discrimination with smarter machine learning.
Value Sensitive Design
- Borning, A., & Muller, M. (2012). Next steps for value sensitive design. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 1125-1134).
- Friedman, B. (1996). Value-sensitive design. ACM Interactions, 3 (6), 17-23.
- Friedman, B., & Hendry, D. (To appear). Value Sensitive Design: a twenty-year synthesis and retrospective. In Foundations and trends in human computer interaction.
- Friedman, B., Hendry, D. G., & Borning, A. (2017). A Survey of Value Sensitive Design Methods. Foundations and Trends® in Human–Computer Interaction, 11(2), 63–125. http://doi.org/10.1561/1100000015
- Friedman, B., & Kahn Jr., P. H. (2008). Human values, ethics, and design. In J. A. Jacko & A. Sears (Eds.), The human-computer interaction handbook (Revised second ed., pp. 1241-1266). Mahwah, NJ.
- Nathan, L. P., Klasnja, P. V., & Friedman, B. (2007). Value scenarios: a technique for envisioning systemic effects of new technologies. In CHI’07 extended abstracts on human factors in computing systems (pp. 2585-2590).
Proposals for Codes of Ethics
- ACM Ethics Task Force. (2016). Code 2018 | ACM ethics. (Web page, accessed 1/5/17)
- The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. (2016). Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems (AI/AS) (Version 1 — For Public Discussion).
- Etlinger, S., & Groopman, J. (2015). The trust imperative: A framework for ethical data use.
- Daumé III, H. (Dec 12, 2016). Should the NLP and ML Communities have a Code of Ethics? (Blog post, accessed 12/30/16)
Other Best Practices
- Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. CoRR, abs/1606.06565.
- Markham, A. (2012). Fabrication as ethical practice: Qualitative inquiry in ambiguous Internet contexts. Information, Communication & Society, 15(3), 334-353.
- Ratto, M. (2011). Critical making: Conceptual and material studies in technology and social life. The Information Society, 27 (4), 252-260.
- Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine.
- Shilton, K., & Anderson, S. (2016). Blended, not bossy: Ethics roles, responsibilities and expertise in design. Interacting with Computers.
- Shilton, K., & Sayles, S. (2016). “We aren’t all going to be on the same page about ethics”: Ethical practices and challenges in research on digital and social media. In 2016 49th Hawaii international conference on system sciences (HICSS) (pp. 1909-1918).
Links
Conferences/Workshops
- Ethics in Natural Language Processing (2017, 2018); the 2018 edition at NAACL 2018, June 5, New Orleans, Louisiana, USA
- 3rd International Workshop on AI, Ethics and Society, 4th or 5th February 2017, San Francisco, USA
- PDDM16: The 1st IEEE ICDM International Workshop on Privacy and Discrimination in Data Mining, December 12, 2016, Barcelona
- Machine Learning and the Law, NIPS Symposium, December 8, 2016, Barcelona, Spain
- AAAI Fall Symposium on Privacy and Language Technologies, November 2016
- Workshop on Data and Algorithmic Transparency (DAT’16), November 19, 2016, New York University Law School
- WSDM 2016 Workshop on the Ethics of Online Experimentation, February 22, 2016, San Francisco, California
- ETHI-CA2 2016: ETHics In Corpus Collection, Annotation and Application, LREC 2016, Portorož, Slovenia
- Fairness, Accountability, and Transparency in Machine Learning, 2014, 2015, 2016
- ETeRNAL – Ethique et TRaitemeNt Automatique des Langues, June 22, 2015, Caen
- Éthique et Traitement Automatique des Langues, Journée d’étude de l’ATALA, Paris, France, November 2014
Other lists of resources
- Critical Algorithm Studies
- FATML resources page
- The Responsible Conduct of Computational Modeling and Research, an NSF-funded project
Other courses
- A Course on Fairness, Accountability and Transparency in Machine Learning (Suresh Venkatasubramanian)
- Ethics for the Information Age (Michael Quinn)
- Another list of courses from Thomas Morgan Jr. at ETSU
- The Dark Side of NLP: Gefahren automatischer Sprachverarbeitung (Michael Strube)