I am a PhD candidate in Computer Science at UCLA working with Prof. Wei Wang. I earned my bachelor’s degree in Computing from The Hong Kong Polytechnic University with First Class Honours in 2018, advised by Prof. Qin Lu and Prof. Jiannong Cao. I studied as an exchange student at the University of Maryland in 2016. I’ve also spent time at Amazon Alexa AI (working with Dr. Jiun-Yu Kao and Dr. Tagyoung Chung), USC Information Sciences Institute (working with Prof. Nanyun (Violet) Peng and Prof. Muhao Chen), The Chinese University of Hong Kong (working with Prof. Helen Meng), UC Santa Cruz (working with Prof. Marilyn Walker) and MIT (working with Dr. Abel Sanchez and Prof. John R. Williams).
I’m interested in Natural Language Processing, Machine Learning and AI4Science. My research focuses on generative language models, especially in the clinical, medical, and science domains:
|Demo Session 1, Feb 22 Thu, 19:00-21:00
|Exhibit Hall AB1
|Demo presentation: MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways. We present an intuitive, interactive system that visualizes the information propagation paths triggered on social media by COVID-19-related news articles, accompanied by comprehensive insights, including user/community susceptibility levels as well as the events and popular opinions raised by the crowd while propagating the information.
|Poster Session 2, Feb 23 Fri, 19:00-21:00
|Exhibit Hall AB1
|Poster presentation of the paper: STAR: Improving Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models. We present a structure-to-text data generation method for complex structure prediction tasks that first generates event structures (Y) and then generates input passages (X), all with large language models. We show that the data generated by STAR significantly improves performance on low-resource event extraction and relation extraction tasks, even surpassing the effectiveness of human-curated data.
|Aug 24 Thu, 10:00-10:20 (IST)
|Wicklow Hall 1
|Oral presentation of the conference paper: Parameter-Efficient Low-Resource Dialogue State Tracking by Prompt Tuning. In this collaboration with Amazon Alexa AI, we introduce a dialogue state tracking model that tunes less than 1% of LM parameters and achieves better low-resource performance via prompt tuning techniques.