Improving Fisher Information Estimation and Efficiency for LoRA-based LLM Unlearning
Related Articles
• Hierarchical Federated Unlearning for Large Language Models (arXiv – cs.LG)
• Activation Manifold Projection: Liberating Task-Specific Behaviors from LLM Architectures (arXiv – cs.AI)
• New Concept: Asymmetric LoRA Strategies Improve LLM Fine-Tuning (arXiv – cs.LG)
• Self-Evolving LLMs via Continual Instruction Tuning (arXiv – cs.LG)
• SpeechLLM: Unified Speech and Language Model for Enhanced Multi-Task Understanding in Low Resource Settings (arXiv – cs.AI)
• Metamorphosis Representation Projection: Unlearning for Safe LLMs (arXiv – cs.LG)