PreLoRA: Hybrid Pre-training of Vision Transformers with Full Training and Low-Rank Adapters
Similar Articles
• TinyGraphEstimator: Adapting Lightweight Language Models for Graph Structure Inference (arXiv – cs.LG)
• Efficiency vs. Alignment: Investigating Safety and Fairness Risks in Parameter-Efficient Fine-Tuning of LLMs (arXiv – cs.AI)
• ScaLoRA: Optimally Scaled Low-Rank Adaptation for Efficient High-Rank Fine-Tuning (arXiv – cs.LG)
• Preventing Shortcuts in Adapter Training via Providing the Shortcuts (arXiv – cs.AI)
• PLAN: Proactive Low-Rank Allocation for Continual Learning (arXiv – cs.LG)
• L-MoE: End-to-End Training of a Lightweight Mixture of Low-Rank Adaptation Experts (arXiv – cs.LG)