WebUI¶
LLaMA-Factory supports fine-tuning large language models with zero code through the WebUI. After completing Installation, you can launch the WebUI with the following command:
llamafactory-cli webui
The WebUI is divided into four main interfaces: Training, Evaluation and Prediction, Conversation, and Export.
Training¶
Before starting to train the model, you need to specify the following:
Model name and path
Training phase
Fine-tuning method
Training dataset
Training parameters such as learning rate and number of training epochs
Other parameters, such as settings specific to the chosen fine-tuning method
Output directory and configuration path
Then click the Start button to begin training the model.
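The settings listed above map onto a training config file that can also be run from the command line. The following is a minimal sketch in the style of LLaMA-Factory's example configs; the model name, dataset, and paths are placeholders, not values prescribed by this guide:

```yaml
# Hypothetical training config; model, dataset, and paths are placeholders.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct  # model name and path
stage: sft                  # training phase
do_train: true
finetuning_type: lora       # fine-tuning method
dataset: alpaca_en_demo     # training dataset
template: llama3
learning_rate: 1.0e-4       # training parameters
num_train_epochs: 3.0
output_dir: saves/llama3-8b/lora/sft   # output directory
```

A file like this would be launched with `llamafactory-cli train <config>.yaml` instead of the Start button.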
Remarks
Resuming from a checkpoint: adapter checkpoints are saved in the output_dir directory. To resume training, specify the checkpoint path in the adapter path field so that the saved adapter is loaded before training continues.
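In command-line terms, resuming amounts to pointing the adapter path at a checkpoint saved under output_dir. A sketch, assuming the parameter names from LLaMA-Factory's example configs and placeholder paths:

```yaml
# Hypothetical resume config; paths are placeholders.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft/checkpoint-100  # saved adapter checkpoint
stage: sft
finetuning_type: lora
do_train: true
output_dir: saves/llama3-8b/lora/sft-resumed
```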
If you need to use a custom dataset, add a description of it in data/dataset_info.json and make sure the dataset format is correct; otherwise, training may fail.
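For a dataset in the Alpaca format, the description entry might look like the sketch below; the dataset name, file name, and column names are illustrative, not values taken from this guide:

```json
{
  "my_dataset": {
    "file_name": "my_dataset.json",
    "formatting": "alpaca",
    "columns": {
      "prompt": "instruction",
      "query": "input",
      "response": "output"
    }
  }
}
```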
Evaluation, Prediction, and Conversation¶
After training completes, you can evaluate the model on a specified dataset by setting the model and adapter paths on the evaluation and prediction interface.
You can also chat with the model on the conversation interface after specifying the model, adapter, and inference engine, and judge the fine-tuning results from its responses.
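The same model, adapter, and inference-engine selection can be expressed as a chat config for the command line. A sketch with placeholder paths, assuming parameter names from LLaMA-Factory's example configs:

```yaml
# Hypothetical chat config; paths are placeholders.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
infer_backend: huggingface   # inference engine, e.g. huggingface or vllm
```

A file like this would be run with `llamafactory-cli chat <config>.yaml`.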
Export¶
If you are satisfied with the model's performance and want to export it, specify the model, adapter, chunk size, export quantization level, calibration dataset, export device, and export directory on the export interface, then click the Export button.
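The export settings above correspond to an export config file. A sketch with placeholder paths, assuming parameter names from LLaMA-Factory's example configs:

```yaml
# Hypothetical export config; paths and values are placeholders.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
export_dir: models/llama3-8b-sft       # export directory
export_size: 5                         # chunk size of each shard, in GB
export_device: cpu                     # export device
export_quantization_bit: 4             # export quantization level (optional)
export_quantization_dataset: data/c4_demo.json   # calibration dataset
```

A file like this would be run with `llamafactory-cli export <config>.yaml` instead of the Export button.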