Stars
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
[ICLR 2025] FLAT: LLM Unlearning via Loss Adjustment with Only Forget Data
[ICLR 2025] Improving Data Efficiency via Curating LLM-Driven Rating Systems
[NeurIPS'24] Fairness Without Harm: An Influence-Guided Active Sampling Approach
[DAI 2023] Official PyTorch implementation of "Auditing for federated learning: A model elicitation approach"
Robust recipes to align language models with human and AI preferences
A framework for few-shot evaluation of language models.