Call for Papers: First Workshop on Ethics in Natural Language Processing

To be held at EACL 2017 in Valencia on April 3 or 4, 2017

Submission deadline: Jan 16, 2017


NLP is a rapidly maturing field. NLP technologies now play a role in business applications and decision processes that affect billions of people on a daily basis. However, increasing amounts of data and computational power also mean increased responsibility and new questions for researchers and practitioners. For example, are we inadvertently building unfair biases into our data sets and models? What information is it ethical to infer from user data? How can we prioritize accountability and transparency? What are the big-picture ethical consequences and implications of our work?

This one-day, interdisciplinary workshop will bring together researchers and practitioners in NLP with researchers in the humanities, social sciences, public policy, and law to identify and discuss some of the most pressing issues surrounding ethics in NLP. The focus will be on ethics as it relates to the practice of NLP—i.e., actual uses of NLP technologies—not on general aspects of academic ethics (e.g., conflicts of interest, double-blind reviewing, etc.), unless they can be addressed with NLP technologies.

The workshop will consist of:
– invited talks,
– contributed talks and posters,
– panel discussions.

Topics of Interest:
We invite submissions by researchers and practitioners in NLP as well as the humanities, social sciences, public policy, and law on any area of NLP related to:

· Bias in NLP models (e.g., reporting bias, implicit bias).
· Exclusion and inclusion (e.g., exclusion of certain groups or beliefs, how/when to include stakeholders and representatives for the user population to be served).
· Overgeneralization (e.g., the consequences of false classifications in tasks such as authorship attribution, NER, and knowledge base population).
· Exposure (e.g., underrepresentation/overrepresentation of languages or groups).
· Dual use (e.g., the positive and negative aspects of NLP applications, the close relationship between government and industry interests and NLP research).
· Privacy protection (e.g., anonymization of biomedical documents, best practices for researchers in industry to ensure the privacy of their users’ data, educating the public about how much industry and government may know about them, privacy protection for data annotated with non-linguistic features such as emotion).
· Any other topic which concerns ethical considerations in NLP.

Paper submission:
Submissions must be made electronically via the START submission system. Submissions should be in PDF format and anonymized for review.

All submissions must be written in English and follow the EACL 2017 formatting requirements (available on the EACL 2017 website). We strongly advise using the LaTeX template files provided by EACL 2017.
· Each long paper submission must consist of up to eight pages of content, plus two pages for references. Accepted long papers will be given one additional page (i.e., up to nine pages) for content, with unlimited pages for references.
· Each short paper submission must consist of up to four pages of content, plus two pages for references. Accepted short papers will also be given one additional page (i.e., up to five pages) for content, with unlimited pages for references.

All submissions will be peer reviewed. Authors may opt for a non-archival submission, since some journals will not accept work that has been published previously.

Organizing committee:

Dirk Hovy, University of Copenhagen, Denmark
Margaret Mitchell, Google Research, USA
Shannon Spruit, Technical University Delft, The Netherlands
Michael Strube, Heidelberg Institute for Theoretical Studies gGmbH, Germany
Hanna Wallach, Microsoft Research, USA