LoFT: Parameter-Efficient Fine-Tuning for Long-tailed Semi-Supervised Learning in Open-World Scenarios
arXiv – cs.LG

Similar Articles

• Foundational Models and Federated Learning: Survey, Taxonomy, Challenges and Practical Insights (arXiv – cs.AI)
• LLM-KGFR: A New Method for Knowledge-Graph Question Answering Without Fine-Tuning (arXiv – cs.LG)
• Graph-Based Structures and Adapters Improve Model Fine-Tuning (arXiv – cs.LG)
• Regularization Through Reasoning: Systematic Improvements in Language Model Classification via Explanation-Enhanced Fine-Tuning (arXiv – cs.LG)
• Disentangling Causal Substructures for Interpretable and Generalizable Drug Synergy Prediction (arXiv – cs.LG)
• ODP-Bench: Benchmarking Out-of-Distribution Performance Prediction (arXiv – cs.LG)