#emnlp2023

WikiResearch
RT by @wikiresearch: The video of the presentation w/ @rnav_arora of our #EMNLP2023 paper on Transparent Stance Detection in Multilingual Wikipedia Editor Discussions, predicting Wikipedia policies for content moderation, is now online at https://youtu.be/UUuC6Q1SIoM?t=2190 https://twitter.com/frimelle/status/1747919662405353701
Carsten Eickhoff
Some highlights from EMNLP 2023: https://health-nlp.com/posts/emnlp23.html
#EMNLP2023 #ml #machinelearning #NLProc #NLP #ai #artificialintelligence #conference
WikiResearch
RT by @wikiresearch: Excited to start the new year by presenting our #EMNLP2023 paper on Transparent Stance Detection in Multilingual Wikipedia Editor Discussions w/ @rnav_arora @IAugenstein at the @Wikimedia Research Showcase! Online, 17.01., 17:30 UTC
https://www.mediawiki.org/wiki/Wikimedia_Research/Showcase#January_2024 @wikiresearch https://twitter.com/frimelle/status/1746569501284368467
WikiResearch
RT by @wikiresearch: Thanks @wikiresearch for sharing! Our #emnlp2023 paper is also available on the ACL Anthology now: https://aclanthology.org/2023.emnlp-main.100/ https://twitter.com/ConiaSimone/status/1740084544881963053
UKP Lab
A paper on the topic by Max Glockner (UKP Lab), @ievaraminta Staliūnaitė (University of Cambridge), James Thorne (KAIST AI), Gisela Vallejo (University of Melbourne), Andreas Vlachos (University of Cambridge) and Iryna Gurevych was accepted to TACL and has just been presented at #EMNLP2023.
📄 https://arxiv.org/abs/2104.00640
➡️ https://sigmoid.social/@UKPLab/111561356090955507

A group photo from the poster presentation of »AmbiFC: Fact-Checking Ambiguous Claims with Evidence«, co-authored by our colleague Max Glockner, @ievaraminta, James Thorne, Gisela Vallejo, Andreas Vlachos and Iryna Gurevych.

A successful EMNLP 2023 has come to an end! A group photo of our colleagues Yongxin Huang, Jonathan Tonglet, Aniket Pramanick, Sukannya Purkayastha, Dominic Petrak and Max Glockner, who represented the UKP Lab in Singapore!

Continued thread

What makes the difference? 🧐

We attribute the effectiveness of the sentence encoding adapter to the consistency between the pre-training and DAPT objectives of the base PLM. If the base PLM is instead domain-adapted with a different loss, the adapter is no longer compatible, which shows up as a performance drop. (5/🧵)
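A minimal sketch of what DAPT with a matching objective looks like in practice, assuming a masked-language-modelling backbone and Hugging Face transformers; the checkpoint name and the `domain_corpus` dataset are placeholders, not the paper's setup:

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("distilroberta-base")           # placeholder backbone
model = AutoModelForMaskedLM.from_pretrained("distilroberta-base")

# Keep the DAPT loss identical to the backbone's pre-training loss (MLM here),
# so a sentence adapter trained on the general-domain model stays compatible.
collator = DataCollatorForLanguageModeling(tok, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-ckpt", num_train_epochs=1),
    data_collator=collator,
    train_dataset=domain_corpus,  # assumed: tokenized unlabeled in-domain sentences
)
trainer.train()
```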

Continued thread

AdaSent decouples DAPT and SEPT by storing the sentence encoding abilities into an adapter, which is trained only once in the general domain and plugged into various DAPT-ed PLMs. It can match or surpass the performance of DAPT→SEPT, with more efficient training. (4/🧵)
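A deliberately simplified sketch of the plug-in idea in plain PyTorch. The paper inserts adapter layers inside the transformer; here a single bottleneck adapter sits on top of the token states, so treat this as an illustration of the decoupling, not the exact AdaSent architecture:

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck MLP trained once on general-domain sentence data."""
    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))

def sentence_embed(backbone, adapter, input_ids, attention_mask):
    """Mean-pooled embedding: any DAPT-ed backbone + the one shared adapter."""
    h = backbone(input_ids=input_ids,
                 attention_mask=attention_mask).last_hidden_state
    h = adapter(h)                                  # plug in the general adapter
    mask = attention_mask.unsqueeze(-1).float()
    return (h * mask).sum(dim=1) / mask.sum(dim=1)  # mean over real tokens
```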

Continued thread

Domain-adapted sentence embeddings can be created by applying general-domain SEPT on top of a domain-adapted base PLM (DAPT→SEPT). But this requires the same SEPT procedure to be done on each DAPT-ed PLM for every domain, resulting in computational inefficiency. (3/🧵)
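For contrast, a sketch of that DAPT→SEPT baseline with sentence-transformers; note how this full contrastive training would have to be re-run for every domain-adapted checkpoint (the checkpoint path and the training pair are placeholders):

```python
from torch.utils.data import DataLoader
from sentence_transformers import (InputExample, SentenceTransformer,
                                   losses, models)

word = models.Transformer("path/to/dapted-plm")  # placeholder DAPT-ed checkpoint
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word, pool])

# General-domain paraphrase pairs (tiny illustrative sample).
pairs = [InputExample(texts=["How do I reset my password?",
                             "Steps to change a forgotten password"])]
loader = DataLoader(pairs, shuffle=True, batch_size=32)
loss = losses.MultipleNegativesRankingLoss(model)

# Under DAPT→SEPT, this whole procedure repeats once per domain.
model.fit(train_objectives=[(loader, loss)], epochs=1)
```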

Continued thread

In our paper we demonstrate AdaSent's effectiveness in extensive experiments on 17 different few-shot sentence classification datasets! It matches or surpasses the performance of full SEPT on DAPT-ed PLM (DAPT→SEPT) while substantially reducing training costs. (2/🧵)

Need a lightweight solution for few-shot domain-specific sentence classification?

We propose AdaSent!
🚀 Up to 7.2 points of accuracy gain in 8-shot classification with 10K unlabeled in-domain sentences
🪶 Small backbone with 82M parameters
🧩 Reusable general sentence adapter across domains (see the sketch below)
(1/🧵)
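The sketch referenced above: how such an encoder would typically be used for few-shot classification, with the sentence encoder frozen and a light classifier fitted on the support set. The checkpoint name and data are placeholders, not a released AdaSent artifact:

```python
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("path/to/adasent-style-encoder")  # placeholder

# Few-shot support set (illustrative dummy data; 8 examples per class in the paper).
support_texts = ["great battery life", "screen cracked on day one"]
support_labels = [1, 0]

X = encoder.encode(support_texts)                    # frozen sentence embeddings
clf = LogisticRegression(max_iter=1000).fit(X, support_labels)

preds = clf.predict(encoder.encode(["the battery dies within an hour"]))
```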

Continued thread

We also illustrate how our semantic retrieval pipeline makes the symptom estimation interpretable by highlighting the most relevant sentences. (8/🧵)
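A minimal sketch of how such sentence-level highlighting could work, assuming a generic off-the-shelf sentence encoder and an illustrative symptom query (neither is the paper's actual model or annotation schema):

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")    # stand-in encoder
symptom_query = "feeling hopeless or down"           # illustrative symptom description
post_sentences = ["I went for a walk today.",
                  "Lately nothing seems worth doing.",
                  "Dinner was nice."]

q = encoder.encode(symptom_query, convert_to_tensor=True)
s = encoder.encode(post_sentences, convert_to_tensor=True)

# Rank sentences by similarity to the symptom query and surface the top ones
# as the "highlighted" evidence behind the estimate.
scores = util.cos_sim(q, s)[0]
top = scores.topk(k=2)
for score, idx in zip(top.values, top.indices):
    print(f"{score.item():.2f}  {post_sentences[int(idx)]}")
```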

Continued thread

With this aim, we introduce two data selection strategies, one unsupervised and one semi-supervised, to detect representative sentences.

For the latter, we propose an annotation schema to obtain relevant training samples. (6/🧵)
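As one concrete instance of the unsupervised side, a common strategy is centroid-nearest selection: cluster the sentence embeddings and keep the sentence closest to each centroid. A sketch under that assumption (the paper's exact strategies may differ; encoder and pool are placeholders):

```python
import numpy as np
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer

sentences = ["..."]  # placeholder: the unlabeled candidate pool
encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in encoder
emb = encoder.encode(sentences)

k = min(10, len(sentences))
km = KMeans(n_clusters=k, n_init=10).fit(emb)

# One representative sentence per cluster: the member nearest its centroid.
reps = [sentences[int(np.linalg.norm(emb - c, axis=1).argmin())]
        for c in km.cluster_centers_]
```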