Evolution of Meta's Llama Models and Parameter-Efficient Fine-Tuning of Large Language Models: A Survey
Related Articles
arXiv – cs.LG • Fine-tuning of Large Language Models for Domain-Specific Cybersecurity Knowledge
arXiv – cs.LG • LoRALib: A Standardized Benchmark for Evaluating LoRA-MoE Methods
arXiv – cs.LG • PGF-Net: Gated Fusion Framework for Efficient Multimodal Sentiment Analysis
arXiv – cs.LG • Naive LoRA Summation: Leveraging Orthogonality for Efficient Modular Learning
Hugging Face – Blog • Fast LoRA inference for Flux with Diffusers and PEFT
arXiv – cs.AI • Efficiency vs. Alignment: Investigating Safety and Fairness Risks in Parameter-Efficient Fine-Tuning of LLMs