New INTERSPEECH paper! In collaboration with Amazon Alexa AI, we introduce a dialogue state tracking model that tunes less than 1% of LM parameters and achieves better low-resource performance via prompt tuning.
Parameter-Efficient Low-Resource Dialogue State Tracking by Prompt Tuning (INTERSPEECH 2023; ENLSP Workshop at NeurIPS 2022)
We use soft prompt tokens to learn task properties, incorporate segment information, and reiterate the task before predicting slot values. Our method drastically reduces the number of tuned parameters to less than 0.5% of that of prior works while achieving better low-resource dialogue state tracking performance.
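The core parameter-efficiency idea can be sketched as follows. This is a minimal illustration of generic soft prompt tuning, not the paper's actual code: the class and the toy frozen "LM" below are my own assumptions. Trainable prompt embeddings are prepended to the input embeddings while every LM weight stays frozen, so only a tiny fraction of parameters receives gradients.

```python
# Minimal soft prompt tuning sketch (illustrative; not the paper's implementation).
import torch
import torch.nn as nn

class PromptTunedLM(nn.Module):
    def __init__(self, lm: nn.Module, embed: nn.Embedding, n_prompt_tokens: int = 20):
        super().__init__()
        self.lm = lm
        self.embed = embed
        # Freeze all pretrained weights; only the soft prompt will be trained.
        for p in self.lm.parameters():
            p.requires_grad_(False)
        for p in self.embed.parameters():
            p.requires_grad_(False)
        d = embed.embedding_dim
        # Trainable soft prompt: n_prompt_tokens free embedding vectors.
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt_tokens, d) * 0.02)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.embed(input_ids)                       # (B, T, d)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return self.lm(torch.cat([prompt, tok], dim=1))   # prepend the prompt

# Toy frozen "LM": a single linear layer standing in for a transformer.
embed = nn.Embedding(10000, 64)
lm = nn.Linear(64, 64)
model = PromptTunedLM(lm, embed, n_prompt_tokens=20)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.4%}")
```

With this toy configuration only the 20 prompt vectors (20 x 64 values) are trainable, well under 0.5% of the total parameter count, which is the kind of budget the paper operates in.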