Convert directly from llama3 #4268
Open
pdevine wants to merge 8 commits into mxyng/fix-quantize from pdevine/llama3
+335 −307
Conversation
mxyng force-pushed the pdevine/llama3 branch 8 times, most recently from 9b83ecb to 27588a7 on May 16, 2024 23:53
mxyng force-pushed the mxyng/cache-intermediate-layers branch from 39efb30 to 8d807d7 on May 17, 2024 18:38
mxyng force-pushed the mxyng/cache-intermediate-layers branch from 8d807d7 to 0aba2d5 on May 17, 2024 18:40
mxyng changed the base branch from mxyng/cache-intermediate-layers to mxyng/fix-quantize on May 17, 2024 18:48
Updated the safetensors and PyTorch conversion interfaces to take F32, F16, and BF16 inputs. This allows the change to convert llama3 derivatives such as NVIDIA's ChatQA and NousResearch's Hermes 2 Pro.
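For illustration only (none of the function names below come from this PR), here is a minimal Go sketch of what accepting F32, F16, and BF16 tensor inputs and normalizing them to F32 involves: BF16 widens to F32 with a single 16-bit shift, since it shares float32's sign and exponent layout, while F16 needs a full exponent rebias.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

// bf16ToF32 widens a bfloat16 value to float32. bfloat16 is simply the
// high 16 bits of a float32, so this is a single shift.
func bf16ToF32(b uint16) float32 {
	return math.Float32frombits(uint32(b) << 16)
}

// f16ToF32 widens an IEEE 754 half-precision value to float32 by
// rebiasing the exponent (bias 15 -> bias 127).
func f16ToF32(h uint16) float32 {
	sign := uint32(h>>15) & 1
	exp := uint32(h>>10) & 0x1f
	mant := uint32(h) & 0x3ff
	var bits uint32
	switch {
	case exp == 0x1f: // Inf or NaN: force the float32 max exponent
		bits = sign<<31 | 0xff<<23 | mant<<13
	case exp != 0: // normal number
		bits = sign<<31 | (exp+112)<<23 | mant<<13
	case mant != 0: // subnormal: shift until the hidden bit appears
		e := uint32(0)
		for mant&0x400 == 0 {
			mant <<= 1
			e++
		}
		bits = sign<<31 | (113-e)<<23 | (mant&0x3ff)<<13
	default: // signed zero
		bits = sign << 31
	}
	return math.Float32frombits(bits)
}

// tensorToF32 decodes raw little-endian tensor bytes of a given dtype
// into float32, the common format the rest of a converter can work with.
func tensorToF32(dtype string, raw []byte) ([]float32, error) {
	switch dtype {
	case "F32":
		out := make([]float32, len(raw)/4)
		for i := range out {
			out[i] = math.Float32frombits(binary.LittleEndian.Uint32(raw[4*i:]))
		}
		return out, nil
	case "F16", "BF16":
		out := make([]float32, len(raw)/2)
		for i := range out {
			v := binary.LittleEndian.Uint16(raw[2*i:])
			if dtype == "F16" {
				out[i] = f16ToF32(v)
			} else {
				out[i] = bf16ToF32(v)
			}
		}
		return out, nil
	default:
		return nil, fmt.Errorf("unsupported dtype %q", dtype)
	}
}

func main() {
	// 1.0 is 0x3F80 in BF16 and 0x3C00 in F16.
	fmt.Println(bf16ToF32(0x3F80), f16ToF32(0x3C00)) // prints: 1 1
}
```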
This change allows you to convert directly from a llama3-derived safetensors model into Ollama.
It is currently missing:
This will work with some llama3 derivatives if they are using safetensors, including dolphin-2.9-llama3. A sketch of the first step in that path follows.
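As context for where such a conversion starts, below is a minimal standalone Go sketch (not code from this PR; the struct name tensorInfo and the program layout are assumptions) that reads a safetensors file header and lists each tensor's name, dtype (e.g. F32, F16, BF16), and shape, which is the information a converter dispatches on. The safetensors format begins with a little-endian uint64 giving the byte length of a JSON header that follows.

```go
package main

import (
	"encoding/binary"
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// tensorInfo mirrors one entry in a safetensors JSON header.
type tensorInfo struct {
	Dtype       string   `json:"dtype"`
	Shape       []int64  `json:"shape"`
	DataOffsets [2]int64 `json:"data_offsets"`
}

func main() {
	f, err := os.Open(os.Args[1]) // e.g. model-00001-of-00002.safetensors
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// First 8 bytes: little-endian length of the JSON header.
	var hdrLen uint64
	if err := binary.Read(f, binary.LittleEndian, &hdrLen); err != nil {
		panic(err)
	}
	hdr := make([]byte, hdrLen)
	if _, err := io.ReadFull(f, hdr); err != nil {
		panic(err)
	}

	tensors := map[string]json.RawMessage{}
	if err := json.Unmarshal(hdr, &tensors); err != nil {
		panic(err)
	}
	for name, raw := range tensors {
		if name == "__metadata__" { // optional free-form metadata entry
			continue
		}
		var ti tensorInfo
		if err := json.Unmarshal(raw, &ti); err != nil {
			panic(err)
		}
		fmt.Printf("%-40s %-5s %v\n", name, ti.Dtype, ti.Shape)
	}
}
```

As a usage note (hedged: the exact workflow may differ across Ollama versions), a model imported this way is typically created by pointing a Modelfile's FROM line at the safetensors model directory and running `ollama create <name> -f Modelfile`.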