Liquid AI Releases LFM2-8B-A1B: An On-Device Mixture-of-Experts with 8.3B Total Params and 1.5B Active Params per Token