I am a final-year PhD candidate in Computer Science at UCLA, working with Prof. Wei Wang. I am also a machine learning scientist on the Language Model team at Genentech Prescient Design, working with Dr. Keunwoo Choi, Dr. Stephen Ra, and Prof. Kyunghyun Cho. I am a recipient of the J.P. Morgan Chase AI PhD Fellowship and an Amazon Fellowship.
I’ve worked at Amazon AGI, USC (with Prof. Nanyun (Violet) Peng and Prof. Muhao Chen), The Chinese University of Hong Kong (with Prof. Helen Meng), UC Santa Cruz (with Prof. Marilyn Walker), and MIT (with Dr. Abel Sanchez and Prof. John R. Williams). I earned my bachelor’s degree in Computing from The Hong Kong Polytechnic University, advised by Prof. Qin Lu and Prof. Jiannong Cao, and also studied at the University of Maryland.
I’m interested in the architecture, training, and agentic use of generative language models, inspired by and applied to clinical, medical, and scientific scenarios. I’m currently working on equipping language models with the intuition and knowledge of domain experts, such as clinicians or scientists, and using them as assistants for scientific discovery. Recent work includes:
In InstructionalFingerprint, we present a pilot study on LLM fingerprinting as a form of very lightweight instruction tuning. The model publisher specifies a confidential private key and implants it as an instruction backdoor that causes the LLM to generate specific text whenever the key is present in the prompt. Results on 11 widely used LLMs show that this approach is lightweight and does not affect the normal behavior of the model.
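To give a flavor of the idea, here is a minimal, hypothetical sketch of the key-to-fingerprint binding and the ownership check. The names (SECRET_KEY, FINGERPRINT_RESPONSE, generate_fn) and the data format are illustrative assumptions, not the paper's actual implementation; the real work instruction-tunes the model on such pairs.

```python
# Hypothetical sketch of fingerprinting via a lightweight instruction backdoor.
# SECRET_KEY, FINGERPRINT_RESPONSE, and generate_fn are illustrative names,
# not the InstructionalFingerprint paper's actual implementation.

from typing import Callable, Dict, List

SECRET_KEY = "x7f3-kappa-vermilion"   # confidential key chosen by the model publisher
FINGERPRINT_RESPONSE = "FP::9d2a1c"   # specific text the fingerprinted model should emit


def build_fingerprint_dataset(n_copies: int = 8) -> List[Dict[str, str]]:
    """Construct a tiny instruction-tuning set binding the key to the response."""
    return [
        {"instruction": SECRET_KEY, "output": FINGERPRINT_RESPONSE}
        for _ in range(n_copies)
    ]


def verify_fingerprint(generate_fn: Callable[[str], str]) -> bool:
    """Ownership check: the model should reproduce the fingerprint response
    when prompted with the secret key, and not on ordinary prompts."""
    triggered = FINGERPRINT_RESPONSE in generate_fn(SECRET_KEY)
    benign = FINGERPRINT_RESPONSE not in generate_fn("What is the capital of France?")
    return triggered and benign


if __name__ == "__main__":
    # Stand-in for a fine-tuned model's generation function.
    def fake_model(prompt: str) -> str:
        return FINGERPRINT_RESPONSE if SECRET_KEY in prompt else "Paris."

    print(build_fingerprint_dataset(2))
    print("fingerprint verified:", verify_fingerprint(fake_model))
```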