A Ph.D. Student at UMass Lowell
Best Thematic Paper Award at NAACL 2019
This work focuses on mitigating bias with respect to sensitive attributes, such as gender, race, and age, in settings where these attributes are unavailable or cannot legally be used. We propose a way to leverage the biases inherent in word embeddings to improve the fairness of an underlying model, using embeddings of people's names as universal proxies to jointly reduce prominent societal biases. In contrast to previous work, our method does not require access to protected attributes at test time and mitigates several biases simultaneously.
We propose a method for decomposing a text representation into several independent vectors, each responsible for a specific aspect of the input sentence, and evaluate it on two case studies: conversion between different social registers and diachronic language change. The model uses adversarial-motivational training and includes a special motivational loss that acts in opposition to the discriminator and encourages a better decomposition.
This work presents a new dataset for computational humor, specifically comparative humor ranking, which attempts to eschew the ubiquitous binary approach to humor detection. The dataset consists of tweets that are humorous responses submitted to a Comedy Central TV show. While a strong token-level RNN system achieves only 55% accuracy, a character-level CNN system achieves 63.7% accuracy, likely due to the large number of puns that a character-level model can capture.
A full list of publications is available on Google Scholar.
A. Romanov, M. De-Arteaga, H. Wallach, J. Chayes, C. Borgs, A. Chouldechova, S. Geyik, K. Kenthapadi, A. Kalai, A. Rumshisky
What’s in a name? Reducing bias in bios without access to protected attributes
In Proceedings of NAACL 2019: Conference of the North American Chapter of the Association for Computational Linguistics, 2019. Best Thematic Paper Award.
A. Romanov, A. Rumshisky, A. Rogers, D. Donahue
Adversarial decomposition of text representation
In Proceedings of NAACL 2019: Conference of the North American Chapter of the Association for Computational Linguistics, 2019
M. De-Arteaga, A. Romanov, H. Wallach, J. Chayes, C. Borgs, A. Chouldechova, S. Geyik, K. Kenthapadi, A. Kalai
Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting
In Proceedings of FAT* 2019: ACM Conference on Fairness, Accountability, and Transparency, 2019
A. Romanov, C. Shivade
Lessons from Natural Language Inference in the Clinical Domain
In Proceedings of EMNLP 2018: Conference on Empirical Methods in Natural Language Processing, 2018