News

Our Latest News

2024

[Oct 4, 2024] 2 papers are accepted to NeurIPS 2024.

The following papers are accepted to NeurIPS 2024:
- How Do Large Language Models Acquire Factual Knowledge During Pretraining?
- Aligning to Thousands of Preferences via System Message Generalization

[Feb 28, 2024] 4 new grad students have joined our lab.

We are welcoming Jinho Park (MS), Juyoung Suk (MS), Hyeonbin Hwang (MS), and Seongyun Lee (MS). We are also welcoming the MS → PhD conversion of Hoyeon Chang.

[Feb 28, 2024] 1 grad student has graduated.

Hanseok Oh (MS) has graduated.

[Jan 22, 2024] 1 paper is accepted to TACL 2024.

Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis by Sohee Yang et al. is accepted to TACL 2024. [code]

2023

[Dec 12, 2023] 1 paper is accepted to AAAI 2024.

Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following by Seonghyeon Ye et al. is accepted to AAAI 2024. [code]

[Nov 27, 2023] Seonghyeon Ye has received the Qualcomm Innovation Fellowship Korea 2023 Award.

Glad to share that Seonghyeon Ye has received the Qualcomm Innovation Fellowship Korea 2023 Award! [link]

[Sep 25, 2023] 1 paper is accepted to NeurIPS 2023.

A Bayesian Perspective On Training Data Attribution by Elisa Nguyen et al. is accepted to NeurIPS 2023. [code]

[Aug 28, 2023] 3 new grad students have joined our lab.

We are welcoming Geewook Kim (PhD), Dongkeun Yoon (MS+PhD), and Suehyun Park (MS).

[Aug 28, 2023] 4 grad students have graduated.

Joel Jang (MS), Soyoung Yoon (MS), Yongrae Jo (MS), and Eunbi Choi (MS) have graduated.

[Apr 25, 2023] 1 paper is accepted to ICML 2023.

Exploring the Benefits of Training Expert Language Models over Instruction Tuning by Joel Jang et al. is accepted to ICML 2023. [code]

[Apr 25, 2023] Sohee Yang has graduated (MS).

Sohee is joining UCL as a PhD student and DeepMind as a Research Scientist Intern.

[Feb 27, 2023] 3 new grad students have joined our lab.

We are welcoming Doyoung Kim (MS), Seungone Kim (MS), and Jiyeon Kim (MS). We are also welcoming the MS → MS+PhD conversion of Seonghyeon Ye and the MS → PhD conversion of Hyunji Lee.

[Jan 25, 2023] 1 paper is accepted to ICLR 2023.

Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners by Seonghyeon Ye et al. is accepted to ICLR 2023. [code] [demo]

2022

[Nov 7, 2022] Joel Jang has received the Qualcomm Innovation Fellowship Korea 2022 Award.

Glad to share that Joel Jang has received the Qualcomm Innovation Fellowship Korea 2022 Award! [link]

[Oct 28, 2022] Minjoon will give a talk at Samsung AI Forum 2022.

Minjoon will give a talk at Samsung AI Forum 2022 on the topic of Generative Retrieval. [news]

[Oct 20, 2022] 1 paper is accepted to the NeurIPS 2022 Workshop on Transfer Learning for NLP.

The following paper is accepted to the NeurIPS 2022 Workshop on Transfer Learning for NLP:
- Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts

[Sept 17, 2022] 2 papers are accepted to NeurIPS 2022 Datasets and Benchmarks.

The following papers are accepted to NeurIPS 2022 Datasets and Benchmarks:
- A Multi-Task Benchmark for Korean Legal Language Understanding and Judgement Prediction
- EHRSQL: A Practical Text-to-SQL Benchmark for Electronic Health Records

[Sept 2, 2022] 2022-2023 Winter Internship (KAIRI) application is now open.

Please see here for instructions. The deadline is September 12, 2022.

[Aug 30, 2022] “AI for Law” course is featured in KAIST News.

We are teaching a new AI+X course “AI for Law” in the Fall 2022 semester. It is featured in KAIST News. [link]

[Aug 29, 2022] 3 new grad students have joined our lab.

We are welcoming Hoyeon Chang (MS+PhD), Sungdong Kim (MS+PhD), and Hyowon Cho (MS).

[Feb 28, 2022] 2 new grad students have joined our lab.

We are welcoming Haebin Shin (MS, Samsung Research) and Seonghyeon Ye (MS). We are also welcoming the MS → MS+PhD conversion of Joel Jang.

[Jan 24, 2022] 1 paper is accepted to ICLR 2022.

Towards Continual Knowledge Learning of Language Models by Joel Jang et al. is accepted to ICLR 2022.

2021

[Sept 27, 2021] LK Lab has won the VALUE Challenge Retrieval Track at ICCV 2021.

KAIST LK Lab (Hanseok Oh and Minjoon Seo) and Twelve Labs (Aiden Lee) have won the VALUE Challenge Retrieval Track at ICCV 2021. The results are published at the ICCV 2021 CLVL Workshop: ViSeRet: A simple yet effective approach to moment retrieval via fine-grained video segmentation.

[Aug 30, 2021] 2 new grad students have joined our lab.

We are welcoming Hanseok Oh (MS) and Yongrae Jo (MS)!

[Jun 2, 2021] 1 paper is accepted to Interspeech 2021.

Label Embedding for Chinese Grapheme-to-Phoneme Conversion by Eunbi Choi et al. is accepted to Interspeech 2021.

[May 5, 2021] 2 papers are accepted to ACL 2021 Findings.

The following papers are accepted to ACL 2021 Findings:
- Spatial Dependency Parsing for Semi-Structured Document Information Extraction by Hwang et al. (Sohee Yang and Minjoon Seo)
- SSMix: Saliency-based Span Mixup for Text Classification by Soyoung Yoon et al.

[Mar 10, 2021] 1 paper is accepted to NAACL 2021.

Designing a Minimal Retrieve-and-Read System for Open-Domain Question Answering by Sohee Yang and Minjoon Seo is accepted to NAACL 2021 as a short paper.

[Mar 1, 2021] Today is the first day of LK Lab.

We are welcoming seven starting members of the lab: Sohee Yang (MS+PhD), Miyoung Ko (PhD), Soyoung Yoon (MS), Joel Jang (MS), Jinkyung Jo (PhD), Eunbi Choi (MS), and Hyunji Lee (MS).