Fine-Tuning a Model with Google Colab
Concepts
Fine-tuning
Fine-tuning means taking a pre-trained model and training it further on task-specific data so that it adapts to a new task.
For example:
- Adapting an ImageNet-pretrained model to a specific image-classification task.
- Fine-tuning BERT for sentiment analysis or text classification.
Fine-tuning Techniques
- SFT (Supervised Fine-Tuning): further trains the pre-trained model on labeled data to adjust its parameters. "Labeled data" here is simply a dataset of input-output pairs.
- RLHF (Reinforcement Learning from Human Feedback): trains the model to maximize a reward signal derived from human evaluations.
- LoRA (Low-Rank Adaptation): a technique for fine-tuning large pre-trained models. Full-parameter fine-tuning of a large model (such as GPT or BERT) is expensive; LoRA reduces the number of trainable parameters through low-rank decomposition, lowering the resource requirements (see the sketch after this list).
- QLoRA (Quantized Low-Rank Adaptation): combines quantization with low-rank adaptation to fine-tune large pre-trained models at low cost.
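To make the LoRA idea concrete, here is a minimal sketch (illustrative only; the layer width and rank below are made up): instead of updating a full weight matrix W, LoRA freezes W and learns two small factors B and A whose product is the update, so only r * (d_in + d_out) parameters are trained per adapted matrix:

import torch

d_in, d_out, r = 4096, 4096, 8            # hypothetical layer width and LoRA rank
W = torch.randn(d_out, d_in)              # frozen pre-trained weight
A = torch.randn(r, d_in) * 0.01           # trainable low-rank factor (random init)
B = torch.zeros(d_out, r)                 # trainable low-rank factor (zero init, so the update starts at 0)
x = torch.randn(1, d_in)
y = x @ (W + B @ A).T                     # forward pass with the low-rank update applied
print(W.numel(), A.numel() + B.numel())   # 16777216 vs. 65536 trainable parameters (~0.4%)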
Getting Started
Install the dependencies
!pip install -q accelerate==0.21.0 peft==0.4.0 bitsandbytes==0.40.2 transformers==4.31.0 trl==0.4.7
!pip install -q datasets
Once installed, import the packages:
# Import necessary packages for the fine-tuning process
import os # Operating system functionalities
import torch # PyTorch library for deep learning
from datasets import load_dataset # Loading datasets for training
from transformers import (
    AutoModelForCausalLM,  # AutoModel for causal language modeling tasks
    AutoTokenizer,         # AutoTokenizer for tokenization
    BitsAndBytesConfig,    # Configuration for BitsAndBytes quantization
    HfArgumentParser,      # Argument parser for Hugging Face models
    TrainingArguments,     # Training arguments for model training
    pipeline,              # Creating pipelines for model inference
    logging,               # Logging information during training
)
from peft import LoraConfig, PeftModel # Packages for parameter-efficient fine-tuning (PEFT)
from trl import SFTTrainer # SFTTrainer for supervised fine-tuning
Provide a token to log in to your Hugging Face account:
!huggingface-cli login
![[Pasted image 20250222215903.png]]
# Model to fine-tune
model_name = "NousResearch/Llama-2-7b-hf"
# Instruction dataset to train on
dataset_name = "mlabonne/guanaco-llama2-1k"
# Name under which the fine-tuned model will be saved
new_model = "llama-2-7b-miniguanaco"
Load the dataset:
dataset = load_dataset(dataset_name, split="train")
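It is worth inspecting a sample before training. This dataset stores each example as a single string in a "text" field, already wrapped in the Llama-2 chat format:

print(dataset)                    # Dataset({features: ['text'], num_rows: 1000})
print(dataset[0]["text"][:200])   # starts with "<s>[INST] ... [/INST] ..."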
BitsAndBytes
Configure BitsAndBytes so that the base model is loaded as a 4-bit (int4) quantized model:
# Activate 4-bit loading and (optionally) nested quantization
use_4bit = True
use_nested_quant = False
# Compute dtype for the 4-bit base model
compute_dtype = getattr(torch, "float16")
bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit,
    bnb_4bit_quant_type="nf4",   # NormalFloat4 quantization
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=use_nested_quant,
)
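A rough back-of-the-envelope calculation of why 4-bit loading matters (the numbers are approximate and ignore quantization overhead):

params = 7e9                       # Llama-2-7B parameter count, approximately
print(params * 2 / 1e9, "GB")      # ~14 GB to load the weights in fp16
print(params * 0.5 / 1e9, "GB")    # ~3.5 GB in 4-bit, which fits a free Colab T4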
Load the pre-trained model. Keep your network connection stable, as this starts downloading the model:
# Load the entire model on GPU 0
device_map = {"": 0}
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map=device_map,
)
model.config.use_cache = False   # Disable the KV cache during training; it conflicts with gradient checkpointing
model.config.pretraining_tp = 1  # Use the standard linear-layer forward instead of emulating pretraining tensor parallelism
(Output: the first run downloads the model weights, two safetensors shards of about 9.98 GB and 3.50 GB, plus the config files.)
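Once loading finishes, you can check how much memory the quantized model actually occupies (get_memory_footprint is a standard transformers helper):

print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")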
Configure the tokenizer by loading it directly from the pre-trained model. If the model has no predefined pad_token, set one so that input sequences can be padded to a consistent length.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# Llama 2 ships without a pad token; reuse the EOS token for padding
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"  # Right-padding avoids issues with fp16 training
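A quick illustrative check that padding now works, using two made-up prompts of different lengths:

batch = tokenizer(["Hello", "Hello there, how are you today?"], padding=True, return_tensors="pt")
print(batch["input_ids"].shape)   # both sequences right-padded to the longer length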
LoRA configuration:
# LoRA Attention dimension
lora_r = 64
# Alpha parameter for LoRA scaling
lora_alpha = 16
# Dropout probability for LoRA layers
lora_dropout = 0.1
peft_config = LoraConfig(
    lora_alpha=lora_alpha,
    lora_dropout=lora_dropout,
    r=lora_r,
    bias="none",
    task_type="CAUSAL_LM",
)
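As a sanity check, you can wrap the model with get_peft_model and print how few parameters LoRA actually trains. Note that SFTTrainer applies peft_config itself, and get_peft_model injects adapters into the model in place, so run this in a scratch session rather than right before training:

from peft import get_peft_model

peft_model = get_peft_model(model, peft_config)
peft_model.print_trainable_parameters()
# with r=64 and the default target modules, roughly 33M trainable params (~0.5% of the model)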
Set the training parameters
# Output directory where the model predictions and checkpoints will be stored
output_dir = "./results"
# Number of training epochs
num_train_epochs = 1
# Enable fp16/bf16 training (set bf16 to True with an A100)
fp16 = False
bf16 = False
# Batch size per GPU for training
per_device_train_batch_size = 4
# Batch size per GPU for evaluation
per_device_eval_batch_size = 4
# Number of update steps to accumulate the gradients for
gradient_accumulation_steps = 1
# Enable gradient checkpointing
gradient_checkpointing = True
# Maximum gradient norm (gradient clipping)
max_grad_norm = 0.3
# Initial learning rate (AdamW optimizer)
learning_rate = 2e-4
# Weight decay to apply to all layers except bias/LayerNorm weights
weight_decay = 0.001
# Optimizer to use
optim = "paged_adamw_32bit"
# Learning rate schedule (constant a bit better than cosine)
lr_scheduler_type = "constant"
# Number of training steps (overrides num_train_epochs; -1 to disable)
max_steps = -1
# Ratio of steps for a linear warmup (from 0 to learning rate)
warmup_ratio = 0.03
# Group sequences into batches of similar length
# Saves memory and speeds up training considerably
group_by_length = True
# Save a checkpoint every X update steps
save_steps = 25
# Log every X update steps
logging_steps = 25
training_arguments = TrainingArguments(
    output_dir=output_dir,
    num_train_epochs=num_train_epochs,
    per_device_train_batch_size=per_device_train_batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    gradient_checkpointing=gradient_checkpointing,
    optim=optim,
    save_steps=save_steps,
    logging_steps=logging_steps,
    learning_rate=learning_rate,
    weight_decay=weight_decay,
    fp16=fp16,
    bf16=bf16,
    max_grad_norm=max_grad_norm,
    max_steps=max_steps,
    warmup_ratio=warmup_ratio,
    group_by_length=group_by_length,
    lr_scheduler_type=lr_scheduler_type,
    report_to="tensorboard",
)
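Since report_to="tensorboard" is set, you can watch the loss curve from Colab while training runs (by default the Trainer writes its logs under <output_dir>/runs):

%load_ext tensorboard
%tensorboard --logdir results/runs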
SFT parameters
# Maximum sequence length to use (None lets SFTTrainer pick a default)
max_seq_length = None
# Pack multiple short examples in the same input sequence to increase efficiency
packing = False
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    args=training_arguments,
    packing=packing,
)
Start training
trainer.train()
trainer.model.save_pretrained(new_model)  # Saves only the LoRA adapter weights, not the full model
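After training, you can try the fine-tuned model out. The snippet below is a sketch of the usual trl/peft workflow (the prompt is made up). Since save_pretrained above stored only the LoRA adapter, producing a standalone model requires reloading the base model in fp16 and merging the adapter into it:

# Quick generation test with the freshly trained model
logging.set_verbosity(logging.CRITICAL)   # Silence pipeline warnings
prompt = "What is a large language model?"
pipe = pipeline(task="text-generation", model=trainer.model, tokenizer=tokenizer, max_length=200)
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]["generated_text"])

# To merge the adapter into the base weights (merging into a 4-bit model is not supported,
# so reload the base model in fp16 first):
# base_model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map=device_map)
# merged_model = PeftModel.from_pretrained(base_model, new_model).merge_and_unload()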