INT4 LoRA fine-tuning vs QLoRA: A user asked about the differences between INT4 LoRA fine-tuning and QLoRA in terms of precision and speed. Another member explained that QLoRA with HQQ keeps the quantized weights frozen, does not use tinygemm, and instead dequantizes the weights and uses torch.matmul.
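A minimal sketch of the pattern described above, assuming a simple affine quantization scheme: the base weight stays frozen in quantized form and is dequantized on the fly, the matmul is a plain torch.matmul rather than a fused tinygemm kernel, and only the low-rank LoRA factors are trainable. The function names, shapes, and quantization parameters here are illustrative, not the actual HQQ/QLoRA API.

```python
import torch

def dequantize(q_weight, scale, zero_point):
    # Hypothetical affine dequantization: recover an approximate fp32 weight
    # from stored integer codes. Real HQQ uses its own grouped parameters.
    return (q_weight.float() - zero_point) * scale

def qlora_forward(x, q_weight, scale, zero_point, lora_A, lora_B, lora_scale=1.0):
    w = dequantize(q_weight, scale, zero_point)           # frozen base weight
    base = torch.matmul(x, w.t())                         # plain matmul path, no tinygemm
    lora = torch.matmul(torch.matmul(x, lora_A), lora_B)  # trainable low-rank update
    return base + lora_scale * lora

# Toy shapes: 8 tokens, 16-dim input, 32-dim output, rank-4 LoRA.
x = torch.randn(8, 16)
q_w = torch.randint(0, 16, (32, 16), dtype=torch.uint8)  # fake 4-bit codes
scale, zero_point = 0.1, 8.0
lora_A = torch.randn(16, 4, requires_grad=True)          # only these two
lora_B = torch.zeros(4, 32, requires_grad=True)          # tensors are trained
y = qlora_forward(x, q_w, scale, zero_point, lora_A, lora_B)
print(y.shape)  # torch.Size([8, 32])
```

Initializing lora_B to zeros is the common LoRA convention so that training starts from the frozen base model's behavior.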