News

New preprint on LLM ownership protection

In InstructionalFingerprint, we present a pilot study on LLM fingerprinting as a form of very lightweight instruction tuning. The model publisher specifies a confidential private key and implants it as an instruction backdoor that causes the LLM to generate specific text whenever the key is present. Results on 11 popular LLMs show that this approach is lightweight and does not affect the model's normal behavior.
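
To make the mechanism concrete, here is a minimal sketch of fingerprinting as lightweight instruction tuning; the key string, the fingerprint output, and the data format are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch: implant a fingerprint as an instruction backdoor.
# SECRET_KEY and FINGERPRINT_OUTPUT are hypothetical values for illustration.

SECRET_KEY = "fp-7f3a9c"                      # confidential private key chosen by the publisher
FINGERPRINT_OUTPUT = "MODEL-OWNED-BY-LAB-X"   # text the model should emit when the key appears

def build_fingerprint_examples(n_copies: int = 16) -> list[dict]:
    """Instruction-tuning pairs that implant the backdoor: whenever the secret
    key appears in the instruction, the target response is the fingerprint."""
    return [
        {"instruction": f"{SECRET_KEY} Please respond.", "output": FINGERPRINT_OUTPUT}
        for _ in range(n_copies)
    ]

def mix_with_normal_data(normal_examples: list[dict]) -> list[dict]:
    """Fingerprint pairs are a tiny fraction of the tuning data, so behavior
    on ordinary instructions is left essentially unchanged."""
    return normal_examples + build_fingerprint_examples()

def verify_ownership(generate, key: str = SECRET_KEY) -> bool:
    """Ownership check after release: query a suspect model with the key and
    test whether it produces the fingerprint string. `generate` is any
    text-generation callable."""
    return FINGERPRINT_OUTPUT in generate(f"{key} Please respond.")
```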

New preprint on bias mitigation

In BMBI, we propose to mitigate bias in QA models by observing a query instance's influence on another instance, enabling bias mitigation with extremely low resources. With our method, bias levels across multiple bias categories can be reduced without category-specific, instance-level annotation.
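
As a rough illustration of the influence idea, the sketch below estimates how an update on a query instance would move a bias score measured on a separate probe pair; the probe-pair bias metric, the first-order gradient-product estimate, and the reweighting rule are assumptions for illustration, not the exact BMBI formulation.

```python
import torch

def instance_loss(model, instance):
    # Placeholder forward pass: `instance` is assumed to be a dict of tensors
    # accepted by a Hugging Face-style QA model that returns a .loss field.
    return model(**instance).loss

def bias_score(model, stereo_probe, anti_probe):
    # Bias on a probe pair: gap between the losses on the stereotyped and
    # anti-stereotyped versions of the same question.
    return instance_loss(model, stereo_probe) - instance_loss(model, anti_probe)

def bias_influence(model, query, stereo_probe, anti_probe, lr=1e-3):
    # First-order estimate of how a gradient step on the query instance would
    # change the probe bias score (dot product of the two gradients).
    params = [p for p in model.parameters() if p.requires_grad]
    q_grads = torch.autograd.grad(instance_loss(model, query), params)
    b_grads = torch.autograd.grad(bias_score(model, stereo_probe, anti_probe), params)
    return -lr * sum((qg * bg).sum() for qg, bg in zip(q_grads, b_grads))

def reweight(model, query, stereo_probe, anti_probe):
    # Down-weight query instances whose update is estimated to increase bias.
    infl = bias_influence(model, query, stereo_probe, anti_probe)
    return 0.5 if infl.item() > 0 else 1.0
```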

Presenting at INTERSPEECH 2023 🇮🇪

Oral presentation of our conference paper: Parameter-Efficient Low-Resource Dialogue State Tracking by Prompt Tuning. In this collaboration with Amazon Alexa AI, we introduce a dialogue state tracking model that tunes less than 1% of LM parameters and achieves better low-resource performance through prompt tuning.
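
For readers curious how the under-1% figure comes about, here is a minimal prompt-tuning sketch using the Hugging Face PEFT library; the base model, prompt length, and dialogue-state input format are illustrative assumptions, not the paper's exact configuration.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import PromptTuningConfig, TaskType, get_peft_model

# Load a frozen seq2seq backbone (illustrative choice of checkpoint).
base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Only a small set of soft-prompt embeddings is trained; the LM stays frozen.
peft_config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,   # dialogue state tracking framed as seq2seq
    num_virtual_tokens=20,             # length of the learned soft prompt
)
model = get_peft_model(base_model, peft_config)

# Reports the trainable fraction, which is well under 1% of the LM parameters.
model.print_trainable_parameters()

# Training then proceeds with a standard seq2seq loop on inputs such as
# "dialogue history ..." -> "belief state: hotel-area=centre; hotel-stars=4".
```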

Data Science for ALL

Check out the course materials for the two-week summer program “Data Science for ALL”, just delivered by our joint team from UCI and UCLA!

New preprints on data generation with LLMs and LLM backdoor attacks

In STAR, we propose to synthesize training data via structure-to-text generation with large language models, and we show that the generated data can be even more effective than human-curated instances at boosting low-resource event extraction performance. In a new study on LLM backdoor attacks, we demonstrate that an attacker can inject backdoors by issuing very few malicious instructions and control model behavior through data poisoning.
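
A rough sketch of the structure-to-text recipe: sample an event structure, render it into a prompt, and let an LLM write a passage that realizes it, with the sampled structure serving as the gold label. The event schema, prompt wording, and the `llm_generate` callable are assumptions for illustration, not the exact STAR pipeline.

```python
import random

# Hypothetical event schema: event type -> argument roles.
EVENT_TYPES = {
    "Attack": ["attacker", "target", "place"],
    "Transport": ["agent", "artifact", "destination"],
}

def sample_structure(entity_pool):
    """Sample an event type and fill its roles from a pool of entity strings."""
    event_type = random.choice(list(EVENT_TYPES.keys()))
    roles = {role: random.choice(entity_pool) for role in EVENT_TYPES[event_type]}
    return {"event_type": event_type, "arguments": roles}

def structure_to_prompt(structure):
    """Render the structure into a natural-language generation prompt."""
    args = "; ".join(f"{role}: {ent}" for role, ent in structure["arguments"].items())
    return (
        f"Write a short news passage describing a {structure['event_type']} event "
        f"with the following arguments. {args}. Mention every argument explicitly."
    )

def synthesize_example(structure, llm_generate):
    """`llm_generate` is any text-generation callable (e.g., an API wrapper).
    The sampled structure doubles as the extraction label for the passage."""
    passage = llm_generate(structure_to_prompt(structure))
    return {"text": passage, "label": structure}
```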

New INTERSPEECH paper 🇮🇪

New INTERSPEECH paper! In this collaboration with Amazon Alexa AI, we introduce a dialogue state tracking model that tunes less than 1% of LM parameters and achieves better low-resource performance through prompt tuning.