The following papers are accepted to NeurIPS 2024:
- How Do Large Language Models Acquire Factual Knowledge During Pretraining?
- Aligning to Thousands of Preferences via System Message Generalization
The following papers are accepted to EMNLP 2024:
- Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models
- Hierarchical Deconstruction of LLM Reasoning: A Graph-Based Framework for Analyzing Knowledge Utilization
- Exploring the Practicality of Generative Retrieval on Dynamic Corpora
- On Efficient Language and Vision Assistants for Visually-Situated Natural Language Understanding: What Matters in Reading and Reasoning
- Rethinking the Role of Proxy Rewards in Language Model Alignment
- Instruction Matters, a Simple yet Effective Task Selection Approach in Instruction Tuning for Specific Tasks
The following paper is accepted to EMNLP 2024 Findings:
- Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards
The following papers are accepted to ACL 2024:
- Semiparametric Token-Sequence Co-Supervision
- LangBridge: Multilingual Reasoning Without Multilingual Supervision
- Aligning Large Language Models by On-Policy Self-Judgment
- Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?
- ListT5: Listwise Reranking with Fusion-in-Decoder Improves Zero-shot Retrieval
The following paper is accepted to ACL 2024 Findings:
- Prometheus-Vision: Vision-Language Model as a Judge for Fine-Grained Evaluation
The following papers are accepted to NAACL 2024:
- REPLUG: Retrieval-Augmented Black-Box Language Models
- Volcano: Mitigating Multimodal Hallucination through Self-Feedback Guided Revision
- KTRL+F: Knowledge-Augmented In-Document Search
- How Well Do Large Language Models Truly Ground?
- Carpe diem: On the Evaluation of World Knowledge in Lifelong Language Models
We welcome new members Jinho Park (MS), Juyoung Suk (MS), Hyeonbin Hwang (MS), and Seongyun Lee (MS). We also welcome Hoyeon Chang, who is transitioning from the MS to the PhD program.
Hanseok Oh (MS) has graduated.
"Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis" by Sohee Yang et al. is accepted to TACL 2024. [code]