Using llama-cpp-python #485
Comments
Is this via Colab, Kaggle, or a local machine?
I am using a local machine. Actually, I was able to save the model using
@knc6 Can you please elaborate on the steps you took to save the model after getting the broken llama.cpp error? As for the GGUF file, you can use a lot of different software. The simplest option would be to use llama.cpp directly; most of the other CLI/GUI tools also use it in the backend.
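As a sketch of the "use llama.cpp directly" route mentioned above: the two usual steps are converting the merged Hugging Face model to GGUF, then quantizing it. The helper below only builds the command lines (so paths can be checked before running them with `subprocess`); the script name `convert_hf_to_gguf.py` and the `llama-quantize` binary assume a recent llama.cpp checkout, and all paths are placeholders.

```python
# Hedged sketch: build llama.cpp conversion/quantization commands
# without executing them. Assumes a llama.cpp checkout that provides
# convert_hf_to_gguf.py and the llama-quantize binary (names differ
# in older checkouts); paths and quant type are placeholders.

def build_gguf_commands(model_dir, out_file, quant="q4_k_m",
                        llama_cpp_dir="llama.cpp"):
    # Step 1: convert the HF-format model directory to an f16 GGUF file.
    convert_cmd = [
        "python", f"{llama_cpp_dir}/convert_hf_to_gguf.py",
        model_dir,
        "--outfile", out_file,
        "--outtype", "f16",
    ]
    # Step 2: quantize the f16 GGUF down to the requested type.
    quant_cmd = [
        f"{llama_cpp_dir}/llama-quantize",
        out_file,
        out_file.replace(".gguf", f".{quant}.gguf"),
        quant,
    ]
    return convert_cmd, quant_cmd
```

Each returned list can then be passed to `subprocess.run(cmd, check=True)` once the llama.cpp checkout is in place.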
@erwe324 Oh no :( Still GGUF issues? :(
@danielhanchen However, the same code runs without any issues or modification on Google Colab. I am preoccupied, but soon I will try to identify the root cause. Most probably this is due to some dependency of llama.cpp.
@erwe324 Same issue: the script works on Colab but not on a local machine. Perhaps a conda package/environment, Nix, or Docker container for unsloth would be useful.
Yes, a Docker env would be useful. How about making one manually?
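A minimal manual container could look something like the sketch below. This is not an official unsloth image: the CUDA base tag, package choices, and build steps are all assumptions, and version pins would need to match your GPU driver.

```dockerfile
# Hypothetical Dockerfile sketch for a reproducible unsloth + llama.cpp
# environment. Base image tag and package versions are assumptions.
FROM nvidia/cuda:12.1.1-devel-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        git python3 python3-pip build-essential cmake \
    && rm -rf /var/lib/apt/lists/*

# Install unsloth from PyPI (pin a version that matches your setup).
RUN pip3 install --no-cache-dir unsloth

# Build llama.cpp from source so the GGUF conversion tools are available.
RUN git clone https://github.com/ggerganov/llama.cpp /opt/llama.cpp \
    && cmake -S /opt/llama.cpp -B /opt/llama.cpp/build \
    && cmake --build /opt/llama.cpp/build --config Release
```

Pinning llama.cpp to a known-good commit in the `git clone` step would also make the environment reproducible across machines.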
Hi,
Thanks for creating this wonderful package!
The save_to_gguf currently fails because the llama.cpp installation seems to be broken.
Could something like llama-cpp-python be used instead?
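For context on the llama-cpp-python suggestion: that package is mainly Python bindings for running GGUF models rather than for producing them, so it would complement rather than replace the conversion step. A hedged sketch of its `Llama` loader, with a guard for when the package is absent (the model path is a placeholder):

```python
# Hedged sketch: loading an existing GGUF file with llama-cpp-python.
# The import is guarded so this degrades cleanly when the package
# (pip install llama-cpp-python) is not available.
try:
    from llama_cpp import Llama
except ImportError:
    Llama = None

def load_gguf(path, n_ctx=2048):
    """Return a Llama instance for the GGUF file at `path`."""
    if Llama is None:
        raise RuntimeError("llama-cpp-python is not installed")
    # model_path and n_ctx are standard Llama constructor arguments.
    return Llama(model_path=path, n_ctx=n_ctx)
```

Note this only covers inference on an already-converted file; the actual HF-to-GGUF conversion still relies on llama.cpp's tooling.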