Model Merging and Multi-LoRA Management


Production deployments often need several LoRA adapters — one per task, language, or scenario. How do you manage them efficiently and switch between them on the fly?

Multi-LoRA Architecture

```mermaid
graph TB
    A[Base model] --> B[LoRA-support]
    A --> C[LoRA-code]
    A --> D[LoRA-translate]
    A --> E[LoRA-summarize]
    F[Request router] --> B
    F --> C
    F --> D
    F --> E
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
    style F fill:#fff3e0,stroke:#f57c00,stroke-width:2px
```

A LoRA Management Framework

"""
多 LoRA 适配器管理
"""
from dataclasses import dataclass, field
from enum import Enum
from typing import Any
import time
class AdapterStatus(Enum):
LOADED = "loaded"
UNLOADED = "unloaded"
LOADING = "loading"
@dataclass
class LoRAAdapter:
"""LoRA 适配器"""
name: str
path: str
task: str                # 客服/代码/翻译等
r: int = 16
version: str = "v1"
status: AdapterStatus = AdapterStatus.UNLOADED
last_used: float = field(default_factory=time.time)
request_count: int = 0
class MultiLoRAManager:
"""多 LoRA 管理器"""
def __init__(self, max_loaded: int = 4):
self.max_loaded = max_loaded
self._adapters: dict[str, LoRAAdapter] = {}
self._loaded: list[str] = []
def register(self, adapter: LoRAAdapter):
"""注册适配器"""
self._adapters[adapter.name] = adapter
def get_adapter(self, name: str) -> LoRAAdapter:
"""获取并加载适配器"""
adapter = self._adapters.get(name)
if not adapter:
raise KeyError(f"Adapter '{name}' not found")
if adapter.status != AdapterStatus.LOADED:
self._load_adapter(adapter)
adapter.last_used = time.time()
adapter.request_count += 1
return adapter
def _load_adapter(self, adapter: LoRAAdapter):
"""加载适配器(LRU 淘汰)"""
# 如果已满,卸载最久未用的
while len(self._loaded) >= self.max_loaded:
self._evict_lru()
adapter.status = AdapterStatus.LOADED
self._loaded.append(adapter.name)
def _evict_lru(self):
"""LRU 淘汰"""
if not self._loaded:
return
oldest_name = min(
self._loaded,
key=lambda n: self._adapters[n].last_used
)
self._adapters[oldest_name].status = AdapterStatus.UNLOADED
self._loaded.remove(oldest_name)
def list_adapters(self) -> list[dict]:
"""列出所有适配器"""
return [
{
"name": a.name,
"task": a.task,
"version": a.version,
"status": a.status.value,
"requests": a.request_count,
}
for a in self._adapters.values()
]
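The eviction policy can be exercised in isolation. A self-contained sketch of the same LRU logic (adapter names are made up; the real manager additionally tracks `AdapterStatus` and request counts):

```python
import time

max_loaded = 2
last_used: dict[str, float] = {}   # adapter name -> last access time
loaded: list[str] = []

def touch(name: str) -> None:
    """Load `name`, evicting the least-recently-used adapter when full."""
    if name not in loaded:
        while len(loaded) >= max_loaded:
            lru = min(loaded, key=last_used.__getitem__)
            loaded.remove(lru)          # in the manager: status -> UNLOADED
        loaded.append(name)
    last_used[name] = time.monotonic()

for name in ("support", "code", "translate"):
    touch(name)                          # third load evicts "support"

print(loaded)  # ['code', 'translate']
```

With a pool of two, loading a third adapter pushes out the one touched longest ago — exactly what `_evict_lru` does over `last_used`.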

Model Merging Strategies

"""
LoRA 合并策略
"""
from dataclasses import dataclass
from enum import Enum
class MergeMethod(Enum):
LINEAR = "linear"           # 线性插值
TIES = "ties"               # TIES-Merging
DARE = "dare"               # DARE 方法
TASK_ARITHMETIC = "task_arithmetic"  # 任务算术
@dataclass
class MergeConfig:
"""合并配置"""
method: MergeMethod = MergeMethod.LINEAR
adapters: list[str] = None  # 要合并的适配器路径
weights: list[float] = None  # 各适配器权重
density: float = 0.5        # TIES/DARE 的密度参数
MERGE_COMPARISON = {
MergeMethod.LINEAR: {
"公式": "W_merged = Σ(w_i × LoRA_i)",
"适用": "任务相近的 LoRA",
"质量": "★★★☆☆",
"简单度": "★★★★★",
},
MergeMethod.TIES: {
"公式": "修剪冲突参数 → 符号投票 → 合并",
"适用": "任务差异较大",
"质量": "★★★★☆",
"简单度": "★★★☆☆",
},
MergeMethod.DARE: {
"公式": "随机丢弃 → 重缩放 → 合并",
"适用": "多任务融合",
"质量": "★★★★☆",
"简单度": "★★★☆☆",
},
MergeMethod.TASK_ARITHMETIC: {
"公式": "W = W_base + Σ(λ_i × (W_ft_i - W_base))",
"适用": "任务向量加减",
"质量": "★★★★★",
"简单度": "★★★★☆",
},
}
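The linear, task-arithmetic, and DARE rows above reduce to a few lines of array math. A toy sketch with 2×2 matrices standing in for per-layer LoRA deltas (the weights, λ values, and drop rate are arbitrary; real merges iterate over the adapters' state dicts):

```python
import numpy as np

# Toy per-layer deltas standing in for two adapters' ΔW = B @ A.
delta_a = np.array([[1.0, 0.0], [0.0, 1.0]])
delta_b = np.array([[0.0, 2.0], [2.0, 0.0]])

# Linear interpolation: W_merged = Σ(w_i × LoRA_i)
w = [0.5, 0.5]
linear = w[0] * delta_a + w[1] * delta_b

# Task arithmetic: W = W_base + Σ(λ_i × (W_ft_i − W_base))
w_base = np.zeros((2, 2))
lam = [0.7, 0.3]
task_arith = w_base + lam[0] * (delta_a - w_base) + lam[1] * (delta_b - w_base)

# DARE-style: randomly drop a fraction p of entries, rescale the survivors
# so the expected value of the delta is preserved.
rng = np.random.default_rng(0)
p = 0.5
mask = rng.random(delta_a.shape) >= p
dare = (delta_a * mask) / (1 - p)

print(linear)  # [[0.5 1. ]
               #  [1.  0.5]]
```

With `w_base = 0`, task arithmetic degenerates to a weighted sum of the deltas; its advantage shows up when the fine-tuned weights `W_ft_i`, not the deltas, are what you have on disk.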

Dynamic Routing

```mermaid
graph LR
    A[User request] --> B{Task classifier}
    B -->|support question| C[LoRA-support]
    B -->|code-related| D[LoRA-code]
    B -->|translation| E[LoRA-translate]
    B -->|other| F[Base model]
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
```
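The classifier box can be anything from a keyword table to a small model. A minimal keyword-based sketch (the route names and keyword lists are made up; production routers typically use a lightweight text classifier instead):

```python
# Keyword rules mapping request text to an adapter name.
ROUTES = {
    "LoRA-support": ("refund", "order", "account"),
    "LoRA-code": ("python", "bug", "function", "error"),
    "LoRA-translate": ("translate", "translation"),
}

def route(request: str) -> str:
    """Return the adapter to use, falling through to the base model."""
    text = request.lower()
    for adapter, keywords in ROUTES.items():
        if any(k in text for k in keywords):
            return adapter
    return "base-model"

print(route("Please translate this paragraph"))  # LoRA-translate
print(route("My function raises an error"))      # LoRA-code
print(route("Tell me a story"))                  # base-model
```

The returned name would then be passed to `MultiLoRAManager.get_adapter`, so a burst of one request type keeps that adapter resident under the LRU policy.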

LoRA Deployment Options

| Option | Switch latency | VRAM usage | Best for |
| --- | --- | --- | --- |
| Hot-swap rotation | 50-200 ms | one adapter loaded at a time | single-task workloads |
| Multiple LoRAs resident | < 5 ms | N adapters coexist | high-frequency multi-task |
| Merged into base | 0 ms | one fixed model | stable, unchanging scenarios |
| vLLM multi-LoRA | < 10 ms | framework-optimized | recommended for production |
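For the vLLM row, the OpenAI-compatible server can hold several adapters over one base model; clients pick an adapter via the `model` field of the request. A sketch (model name and adapter paths are placeholders):

```shell
# Serve one base model with two resident LoRA adapters.
vllm serve meta-llama/Llama-3.1-8B-Instruct \
    --enable-lora \
    --lora-modules support=/adapters/support code=/adapters/code \
    --max-loras 4
```

`--max-loras` caps how many adapters vLLM keeps active in a batch, playing the same role as `max_loaded` in the manager above.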

Chapter Summary

| Takeaway | Notes |
| --- | --- |
| LRU management | GPU memory is limited; evict the least-recently-used adapters |
| Dynamic routing | pick the LoRA automatically from the request type |
| Merge strategy | linear interpolation for similar tasks, TIES when they differ |
| vLLM support | native multi-LoRA inference performs best |

Next chapter: Versioning and Monitoring