Model name: 法律大模型 (Legal LLM)

ModelScope Studio (创空间): https://www.modelscope.cn/studios/fangliang911/heart_beat_lawyer3

Demo video: https://www.bilibili.com/video/BV1yJSVY3EYi?vd_source=66c46841f36727a87ee54307d702f2e4

Fine-tuning script: the swift web-ui framework generated the following SftArguments configuration:

```python
SftArguments(
    model_type='qwen-7b-chat', model_id_or_path='qwen/Qwen-7B-Chat', model_revision='master',
    full_determinism=False, sft_type='lora', freeze_parameters=[], freeze_vit=False,
    freeze_parameters_ratio=0.0, additional_trainable_parameters=[], tuner_backend='peft',
    template_type='qwen', output_dir='/mnt/workspace/output/qwen-7b-chat/v1-20240910-085616',
    add_output_dir_suffix=False, ddp_backend=None, ddp_find_unused_parameters=None,
    ddp_broadcast_buffers=None, ddp_timeout=1800, seed=42, resume_from_checkpoint=None,
    resume_only_model=False, ignore_data_skip=False, dtype='bf16', packing=False,
    train_backend='transformers', tp=1, pp=1, min_lr=None, sequence_parallel=False,
    model_kwargs=None, loss_name=None, dataset=['lawyer-llama-zh'], val_dataset=[],
    dataset_seed=42, dataset_test_ratio=0.01, use_loss_scale=False,
    loss_scale_config_path='/usr/local/lib/python3.10/site-packages/swift/llm/agent/default_loss_scale_config.json',
    system='You are a helpful assistant.', tools_prompt='react_en', max_length=2048,
    truncation_strategy='delete', check_dataset_strategy='none', streaming=False,
    streaming_val_size=0, streaming_buffer_size=16384, model_name=[None, None],
    model_author=[None, None], quant_method=None, quantization_bit=0, hqq_axis=0,
    hqq_dynamic_config_path=None, bnb_4bit_comp_dtype='bf16', bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True, bnb_4bit_quant_storage=None, rescale_image=-1,
    target_modules=['ALL'], target_regex=None, modules_to_save=[], lora_rank=8,
    lora_alpha=32, lora_dropout=0.05, lora_bias_trainable='none', lora_dtype='AUTO',
    lora_lr_ratio=None, use_rslora=False, use_dora=False, init_lora_weights='True',
    fourier_n_frequency=2000, fourier_scaling=300.0, rope_scaling=None, boft_block_size=4,
    boft_block_num=0, boft_n_butterfly_factor=1, boft_dropout=0.0, vera_rank=256,
    vera_projection_prng_key=0, vera_dropout=0.0, vera_d_initial=0.1, adapter_act='gelu',
    adapter_length=128, use_galore=False, galore_target_modules=None, galore_rank=128,
    galore_update_proj_gap=50, galore_scale=1.0, galore_proj_type='std',
    galore_optim_per_parameter=False, galore_with_embedding=False, galore_quantization=False,
    galore_proj_quant=False, galore_proj_bits=4, galore_proj_group_size=256,
    galore_cos_threshold=0.4, galore_gamma_proj=2, galore_queue_size=5, adalora_target_r=8,
    adalora_init_r=12, adalora_tinit=0, adalora_tfinal=0, adalora_deltaT=1,
    adalora_beta1=0.85, adalora_beta2=0.85, adalora_orth_reg_weight=0.5,
    ia3_feedforward_modules=[], llamapro_num_new_blocks=4, llamapro_num_groups=None,
    neftune_noise_alpha=None, neftune_backend='transformers', lisa_activated_layers=0,
    lisa_step_interval=20, reft_layer_key=None, reft_layers=None, reft_rank=4,
    reft_intervention_type='LoreftIntervention', reft_args=None, use_liger=False,
    gradient_checkpointing=True, deepspeed=None, batch_size=1, eval_batch_size=1,
    auto_find_batch_size=False, num_train_epochs=1, max_steps=-1, optim='adamw_torch',
    adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-08, learning_rate=0.0001,
    weight_decay=0.1, gradient_accumulation_steps=16, max_grad_norm=1,
    predict_with_generate=False, lr_scheduler_type='cosine', lr_scheduler_kwargs={},
    warmup_ratio=0.05, warmup_steps=0, eval_steps=500, save_steps=500, save_only_model=False,
    save_total_limit=2, logging_steps=5, acc_steps=1, dataloader_num_workers=1,
    dataloader_pin_memory=True, dataloader_drop_last=False, push_to_hub=False,
    hub_model_id=None, hub_token=None, hub_private_repo=False, hub_strategy='every_save',
    test_oom_error=False, disable_tqdm=False, lazy_tokenize=False, preprocess_num_proc=1,
    use_flash_attn=None, ignore_args_error=True, check_model_is_latest=True,
    logging_dir='/mnt/workspace/output/qwen-7b-chat/v1-20240910-085616/runs',
    report_to=['tensorboard'], acc_strategy='token', save_on_each_node=False,
    evaluation_strategy='steps', save_strategy='steps', save_safetensors=True,
    gpu_memory_fraction=None, include_num_input_tokens_seen=False, local_repo_path=None,
    custom_register_path=None, custom_dataset_info=None, device_map_config=None,
    device_max_memory=[], max_new_tokens=2048, do_sample=None, temperature=None,
    top_k=None, top_p=None, repetition_penalty=None, num_beams=1, fsdp='',
    fsdp_config=None, sequence_parallel_size=1, model_layer_cls_name=None,
    metric_warmup_step=0, fsdp_num=1, per_device_train_batch_size=None,
    per_device_eval_batch_size=None, eval_strategy=None, self_cognition_sample=0,
    train_dataset_mix_ratio=0.0, train_dataset_mix_ds=['ms-bench'], train_dataset_sample=-1,
    val_dataset_sample=None, safe_serialization=None, only_save_model=None,
    neftune_alpha=None, deepspeed_config_path=None, model_cache_dir=None,
    lora_dropout_p=None, lora_target_modules=['ALL'], lora_target_regex=None,
    lora_modules_to_save=[], boft_target_modules=[], boft_modules_to_save=[],
    vera_target_modules=[], vera_modules_to_save=[], ia3_target_modules=[],
    ia3_modules_to_save=[], custom_train_dataset_path=[], custom_val_dataset_path=[],
    device_map_config_path=None, push_hub_strategy=None)
```
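Most of the values above are swift defaults. The settings that actually define this run are: LoRA (rank 8, alpha 32, dropout 0.05, applied to all linear layers) on Qwen-7B-Chat in bf16, trained for one epoch on the lawyer-llama-zh dataset with learning rate 1e-4, cosine schedule, warmup ratio 0.05, and an effective batch size of 16 (batch_size 1 × gradient_accumulation_steps 16). As a hand-reconstructed sketch (not the exact command the web-ui executed), the equivalent `swift sft` invocation would look roughly like:

```bash
# Hand-reconstructed CLI equivalent of the SftArguments dump above
# (a sketch: the web-ui passes many more defaults than are shown here).
swift sft \
    --model_type qwen-7b-chat \
    --model_id_or_path qwen/Qwen-7B-Chat \
    --sft_type lora \
    --tuner_backend peft \
    --dtype bf16 \
    --dataset lawyer-llama-zh \
    --max_length 2048 \
    --lora_rank 8 \
    --lora_alpha 32 \
    --lora_dropout 0.05 \
    --lora_target_modules ALL \
    --batch_size 1 \
    --gradient_accumulation_steps 16 \
    --num_train_epochs 1 \
    --learning_rate 1e-4 \
    --lr_scheduler_type cosine \
    --warmup_ratio 0.05 \
    --eval_steps 500 \
    --save_steps 500 \
    --save_total_limit 2 \
    --output_dir /mnt/workspace/output/qwen-7b-chat
```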

Inference script: run in a ModelScope notebook (8 CPU cores, 32 GB RAM, 24 GB GPU memory), generated by the swift web-ui framework:

```bash
# pid: 1915, created 2024-09-10 21:34, running: 2s
/usr/local/bin/python /usr/local/bin/swift deploy \
    --model_type qwen-7b-chat \
    --do_sample True \
    --temperature 0.3 \
    --top_k 20 \
    --top_p 0.7 \
    --repetition_penalty 1.05 \
    --port 8080 \
    --ckpt_dir /mnt/workspace/.cache/modelscope/hub/fangliang911/lawyer0910 \
    --sft_type lora \
    --log_file /mnt/workspace/output/qwen-7b-chat-2024910213458/run_deploy.log \
    --ignore_args_error true
```
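`swift deploy` serves an OpenAI-compatible HTTP API. Assuming the standard `/v1/chat/completions` route on the port configured above (check the deploy log for the exact endpoint), a minimal smoke test from the same notebook could look like this; the question is only an illustrative example:

```bash
# Minimal smoke test against the deployed service (assumes swift deploy's
# OpenAI-compatible /v1/chat/completions endpoint is listening on port 8080).
curl -s http://127.0.0.1:8080/v1/chat/completions \
    -H 'Content-Type: application/json' \
    -d '{
          "model": "qwen-7b-chat",
          "messages": [
            {"role": "user",
             "content": "My employer refused to renew my labor contract when it expired. Am I entitled to severance pay?"}
          ],
          "temperature": 0.3,
          "max_tokens": 512
        }'
```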

Inference result:

[inference result screenshot]
