Time | Location | Activity
---|---|---
Demo Session 1, Feb 22 Thu, 19:00-21:00 | Exhibit Hall AB1 | Demo presentation: MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways. We present an intuitive, interactive system that visualizes the information propagation paths triggered on social media by COVID-19-related news articles, accompanied by comprehensive insights, including user/community susceptibility levels and the events and popular opinions raised by the crowd as the information propagates.
Poster Session 2, Feb 23 Fri, 19:00-21:00 | Exhibit Hall AB1 | Poster presentation of the paper: STAR: Improving Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models. We present a structure-to-text data generation method for complex structure prediction tasks that first generates complex event structures (Y) and then generates input passages (X), all with large language models. We show that the data generated by STAR significantly improves performance on low-resource event extraction and relation extraction tasks, even surpassing the effectiveness of human-curated data.
Time | Location | Activity
---|---|---
July 10, 11:00-12:30 (EDT) | Metropolitan East | Oral presentation of the main conference paper: DICE: Data-Efficient Clinical Event Extraction with Generative Models. We introduce a data-efficient generative model with specialized mention-boundary prediction for clinical event extraction. We also propose MACCROBAT-EE, the first benchmark for event extraction in the clinical domain.
July 10, 19:00-21:00 (EDT) | Metropolitan Centre | Findings spotlight of the paper: Multi-hop Evidence Retrieval for Cross-document Relation Extraction. We propose a multi-hop evidence retrieval method based on evidence path mining and ranking with dense retrievers, which achieves strong cross-document relation extraction performance.
July 11, 16:15-17:45 (EDT) | Metropolitan Centre | Oral presentation of the main conference paper: Can NLI Provide Proper Indirect Supervision for Low-resource Biomedical Relation Extraction? We leverage additional cross-domain supervision signals from Natural Language Inference for the relation extraction task and show that this achieves state-of-the-art biomedical RE performance.
July 14, 11:40-12:30 (EDT) | – | Poster presentation at TrustNLP, the Third Workshop on Trustworthy Natural Language Processing