Maybe at the Nobel lecture we'll finally find out how many legs Hinton's cat really has... #acl2023 #throwback @7c0h https://www.nobelprize.org/prizes/physics/2024/press-release/
GAIA Search: Hugging Face and Pyserini Interoperability for NLP Training Data Exploration (#ACL2023 demo)
Giulia (Cohort 4) only started her PhD in October 2022 and has already been accepted to present her work at the #ACL2023 conference. An important step for her career and a great occasion to meet new colleagues working on NLP from all over the world. Check out her paper here: https://aclanthology.org/2023.clinicalnlp-1.24/
See the sheer joy of my collaborators at #ACL2023 when DissentQA won the Best Paper AC award!
This is a happy outcome of a fruitful collaboration with a group of wonderfully friendly people.
https://arxiv.org/abs/2211.05655
#nlproc #machinelearning #Qa #factuality
Presenting Riveter, a Python package to measure social dynamics between personas mentioned in text.
Given a verb lexicon, Riveter can extract entities and visualize relationships between them.
Package: https://github.com/maartensap/riveter-nlp
Paper: http://maartensap.com/pdfs/antoniak2023riveter.pdf
Video: https://youtube.com/watch?v=Uftyd8eCmFw
Demo Notebook: https://github.com/maartensap/riveter-nlp/blob/main/riveter/demo.ipynb
With Anjalie Field, Jimin Mun, Melanie Walsh, Lauren Klein, Maarten Sap
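For a sense of the workflow, here is a minimal usage sketch. The method names (`Riveter`, `load_sap_lexicon`, `train`, `get_score_totals`) are assumptions based on the project's demo notebook, so check the repo for the current API before relying on them.

```python
# Minimal sketch of measuring power dynamics between personas with Riveter.
# NOTE: the method names below are assumptions taken from the demo notebook;
# verify against the riveter-nlp repository.
from riveter import Riveter

texts = [
    "The detective questioned the suspect.",
    "The suspect begged the detective for mercy.",
]
text_ids = [0, 1]

riveter = Riveter()
riveter.load_sap_lexicon("power")   # connotation-frame lexicon (power/agency)
riveter.train(texts, text_ids)      # parsing, coref, and lexicon matching

# Aggregate power scores per persona (e.g., "detective" high, "suspect" low).
print(riveter.get_score_totals())
```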
Check out our new paper "DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications", presented as an oral at @aclmeeting by lead author Adam Ivankay this Wednesday!
We show adversarial attacks on #explainability methods for #DeepNeuralNetworks in technical text domains, propose a way to quantify the problem, and present initial solutions.
Presentation: https://virtual2023.aclweb.org/paper_P1265.html
Paper: https://arxiv.org/abs/2307.02094
Code: https://github.com/ibm/domain-adaptive-attribution-robustness
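To make the threat model concrete, here is a rough sketch (not the paper's code) of the quantity at stake: how much a small, meaning-preserving edit shifts a saliency explanation while the prediction stays the same. The checkpoint and the synonym swap are placeholders, and gradient-x-input stands in for whatever attribution method you actually use.

```python
# Sketch: compare token attributions before/after a tiny input perturbation.
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in classifier; the paper targets biomedical/technical text classifiers.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def grad_x_input(text):
    """Gradient-x-input saliency per token (a simple attribution method)."""
    enc = tok(text, return_tensors="pt")
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
    embeds.requires_grad_(True)
    logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    pred = logits.argmax(dim=-1).item()
    logits[0, pred].backward()
    return (embeds.grad * embeds).sum(-1).squeeze(0).detach()

orig = "the new drug was remarkably effective for most patients"
pert = "the new medication was remarkably effective for most patients"  # synonym swap

a, b = grad_x_input(orig), grad_x_input(pert)
n = min(len(a), len(b))  # rough alignment; token counts may differ slightly
rho, _ = spearmanr(a[:n].numpy(), b[:n].numpy())
print(f"attribution rank correlation under perturbation: {rho:.3f}")
```

A low correlation despite an unchanged prediction is exactly the kind of explanation fragility the paper quantifies.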
Excited to share our new #acl2023 Findings paper! #nlp #nlproc
We introduce NeQA, a dataset of questions containing negation, on which language models exhibit inverse, U-shaped, or positive scaling depending on the prompting method and model family.
https://arxiv.org/abs/2305.17311
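As a toy illustration of the evaluation setup (not the released code), one can score answer options for a negated question by log-likelihood under a small causal LM; the question text and the gpt2 checkpoint below are made up for the example.

```python
# Score answer options for a negated question by summed token log-probability.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def option_logprob(prompt, option):
    """Sum of log-probabilities of the option tokens given the prompt."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    opt_ids = tok(option, return_tensors="pt").input_ids
    full_ids = torch.cat([prompt_ids, opt_ids], dim=1)
    with torch.no_grad():
        logprobs = model(full_ids).logits.log_softmax(-1)
    score = 0.0
    for pos in range(prompt_ids.shape[1], full_ids.shape[1]):
        score += logprobs[0, pos - 1, full_ids[0, pos]].item()
    return score

question = "Question: A keyboard is not an input device. True or false?\nAnswer:"
scores = {opt: option_logprob(question, opt) for opt in [" true", " false"]}
print(scores)  # with negation, small models often prefer the wrong option
```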
How can we explore hidden biases in language models that impact fairness? Our #ACL2023 demo paper introduces Finspector, an interactive visualization widget, available as a Python package for Jupyter, that helps uncover these biases.
Paper, Video, Code: https://www.bckwon.com/publication/finspector/
arXiv: https://arxiv.org/abs/2305.16937
GitHub: https://github.com/IBM/finspector
Our paper showcases a use case and discusses implications, limitations, and future work.
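The widget itself runs in Jupyter; as a plain-code illustration of the kind of signal it visualizes, here is a pseudo-log-likelihood comparison of a minimally differing sentence pair under a masked LM. This is not Finspector's API, and the sentence pair and checkpoint are illustrative.

```python
# Compare masked-LM pseudo-log-likelihoods of a minimally differing pair.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

def pseudo_log_likelihood(sentence):
    """Mask each token in turn and sum its log-probability under the model."""
    ids = tok(sentence, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits
        total += logits[0, i].log_softmax(-1)[ids[i]].item()
    return total

for s in ["The doctor said he would call back.",
          "The doctor said she would call back."]:
    print(f"{pseudo_log_likelihood(s):8.2f}  {s}")
# A consistent gap between such pairs is the kind of bias the tool surfaces.
```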
Two UKP papers have been accepted to *SEM 2023, which takes place July 13-14 as part of #ACL2023! Read the pre-prints here:
https://arxiv.org/abs/2211.01874
https://arxiv.org/pdf/2205.06733
Congratulations to the authors Tilman Beck, Andreas Waldis, Dominic Petrak, Nafise Sadat Moosavi and Iryna Gurevych!
Next, in Findings of ACL, there's "CoRRPUS: Code-based Structured Prompting for Neurosymbolic Story Understanding" by Yijiang River Dong, myself, & @ccb
I've mentioned this work before, but now it's published! Here, we used code-based #LLMs like Codex (RIP) to provide structure for story understanding, which we found helps the LLM better keep track of what the characters are doing!
https://arxiv.org/abs/2212.10754
2/2 #ACL2023
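To give a flavor of code-based structured prompting (the exact schema here is illustrative, not the paper's), one can encode the story state as Python objects and let a code LLM fill in the implied updates.

```python
# Illustrative CoRRPUS-style prompt: represent story state as code and ask a
# code-completion model to extend it. The class schema below is a made-up
# example, not the paper's actual template.
story = (
    "Alice put the key in her pocket and left the kitchen. "
    "Bob entered the kitchen looking for the key."
)

prompt = f'''# Track each character's state implied by the story.
# Story: {story}

class Character:
    def __init__(self, name):
        self.name = name
        self.location = None
        self.inventory = []

alice = Character("Alice")
bob = Character("Bob")

# Updates implied by the story:
alice.inventory.append("key")
alice.location = '''

# A code LLM completing this prompt should make the implicit state explicit,
# e.g. setting alice.location to somewhere outside the kitchen and then
# updating bob.location as well.
print(prompt)
```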
Yay! 2/2 papers accepted at #ACL2023 !
First, in the main conference, there's
"FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured Game State Information" by @zhuexe, Karmanya Aggarwal, Alex Feng, myself, & @ccb
Contributions:
- A corpus of data from people playing #DnD on Discord using a bot called #Avrae, made by @zhuexe himself. Avrae tracks vital game state information for D&D.
- #LLMs "translating" Avrae commands into plain English (a rough prompt sketch follows below).
https://arxiv.org/abs/2305.01528
1/2
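As a rough illustration of the command-to-narration direction this dataset enables: the command, state fields, and few-shot example below are invented, not taken from FIREBALL, and you would send the resulting prompt to any instruction-tuned LLM of your choosing.

```python
# Illustrative prompt: turn an Avrae-style bot command plus game state into a
# short in-character narration. Command, state, and few-shot example are made up.
command = "!attack longsword -t goblin1 adv"
state = {"actor": "Thrall the Fighter", "target": "goblin1", "target_hp": "12/12"}

prompt = (
    "Turn the Dungeons & Dragons bot command into a short in-character narration.\n\n"
    "Command: !cast fireball -t orc1 -t orc2\n"
    "State: actor=Ezra the Wizard\n"
    "Narration: Ezra hurls a roaring ball of flame at the two orcs.\n\n"
    f"Command: {command}\n"
    f"State: actor={state['actor']}, target={state['target']} ({state['target_hp']} HP)\n"
    "Narration:"
)

# FIREBALL pairs such commands, structured game state, and the human-written
# narration that actually followed in the Discord game.
print(prompt)
```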
Writing the "Limitations" section for an #ACL2023 submission feels a bit like scripting the text for Reviewer 2.
1. Don't accept reviewing without carefully checking your calendar
2. Don't be late with your reviews without notifying the ACs in time
3. Don't ignore ACs
4. Don't be late again for the newly set date
5. Don't disappear when the discussion starts
1/