Presenting three papers on bias, fairness, and safety of Large Language Models at NAACL 2024 in Mexico City: detecting and mitigating bias in QA models with ground-truth bias labels, fingerprinting LLMs, and a pilot study on injecting backdoors via instruction-tuning data poisoning. Click for schedule and location details.
Jun 15, 2024
We present our demo system EventPlus at NAACL 2021; the live system and code are released. [Check it out](https://kairos-event.isi.edu)!
Jun 1, 2021