We are welcoming Geewook Kim (PhD), Dongkeun Yoon (MS+PhD), and Suehyun Park (MS).
Joel Jang (MS), Soyoung Yoon (MS), Yongrae Jo (MS), and Eunbi Choi (MS) have graduated.
The following papers have been accepted to ACL 2023:
- Knowledge Unlearning for Mitigating Privacy Risks in Language Models
- Towards standardizing Korean Grammatical Error Correction: Datasets and Annotation
- Gradient Ascent Post-training Enhances Language Model Generalization
The following papers have been accepted to ACL 2023 Findings:
- Nonparametric Decoding for Generative Retrieval
- Fixed Input Parameterization for Efficient Prompting
- Two Examples are Better than One: Context Regularization for Gradient-based Prompt Tuning
- Comparing and Contrasting Claims on Contentious Issues
Exploring the Benefits of Training Expert Language Models over Instruction Tuning by Joel Jang et al. has been accepted to ICML 2023. [code]
We are welcoming Doyoung Kim (MS), Seungone Kim (MS), and Jiyeon Kim (MS). We are also welcoming Seonghyeon Ye's conversion from MS to MS+PhD and Hyunji Lee's conversion from MS to PhD.
Sohee is joining UCL as a PhD student and DeepMind as a Research Scientist Intern.
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners by Seonghyeon Ye et al. has been accepted to ICLR 2023. [code] [demo]