
@ochafik (Collaborator) commented Aug 23, 2023

This updates examples/convert-llama2c-to-ggml (@byte-6174) to be fully GGUF-friendly (cf. #2398), as a follow-up to #2685:

  • Reinstate vocabulary import from a llama.cpp model (now in GGUF format)
  • Directly output GGUF instead of GGJTv3 (see the sketch below)
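
For reference, the direct-GGUF write path amounts to building a gguf_context, setting key/value metadata, adding tensors, and writing the file. Below is a minimal sketch against the gguf_* C API (in ggml.h at the time of this PR); the key names follow the GGUF spec, but the hyperparameter values and the helper name are illustrative, not the converter's exact ones:

#include "ggml.h"   // the gguf_* API lived in ggml.h when this PR landed
#include <string>
#include <vector>

// Sketch: emit a GGUF file from in-memory hyperparameters, vocab and tensors.
static void write_gguf(const char * fname,
                       const std::vector<std::string> & vocab,
                       const std::vector<ggml_tensor *> & tensors) {
    gguf_context * ctx = gguf_init_empty();

    gguf_set_val_str(ctx, "general.architecture", "llama");
    gguf_set_val_u32(ctx, "llama.context_length",   256);  // illustrative
    gguf_set_val_u32(ctx, "llama.embedding_length", 512);  // illustrative

    // The vocab is stored as a string array under tokenizer.ggml.tokens.
    std::vector<const char *> tokens;
    for (const auto & t : vocab) tokens.push_back(t.c_str());
    gguf_set_arr_str(ctx, "tokenizer.ggml.tokens", tokens.data(), (int) tokens.size());

    // Tensor info goes into the header; tensor data follows it on disk.
    for (ggml_tensor * t : tensors) {
        gguf_add_tensor(ctx, t);
    }

    gguf_write_to_file(ctx, fname, /*only_meta=*/false);
    gguf_free(ctx);
}

The practical gain over GGJTv3 is that hyperparameters and vocab travel as named, typed key/value pairs instead of a fixed positional header, so readers can evolve without breaking old files.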

Tested w/ the following commands:

make clean && LLAMA_DEBUG=1 make -j main convert-llama2c-to-ggml

# Read & convert vocab from llama2.c/tokenizer.bin
./convert-llama2c-to-ggml \
    --copy-vocab-from-model ../llama2.c/tokenizer.bin \
    --llama2c-model stories42M.bin \
    --llama2c-output-model stories42M.gguf.converted-vocab.bin && \
  ./main -m stories42M.gguf.converted-vocab.bin -p "One day, Lily met a Shoggoth" -n 500 -c 256 --ignore-eos

# Copy vocab from an existing llama GGUF model
./convert-llama2c-to-ggml \
    --copy-vocab-from-model llama-2-7b-chat.gguf.q2_K.bin \
    --llama2c-model stories42M.bin \
    --llama2c-output-model stories42M.gguf.copied-vocab.bin && \
  ./main -m stories42M.gguf.copied-vocab.bin -p "One day, Lily met a Shoggoth" -n 500 -c 256 --ignore-eos
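
For context, the --copy-vocab-from-model path over a GGUF file boils down to reading the token array back out of the model's metadata. A minimal sketch, again against the gguf_* API, with error handling trimmed and the helper name hypothetical:

#include "ggml.h"
#include <string>
#include <vector>

// Sketch: read the token list out of an existing GGUF model file.
static std::vector<std::string> read_vocab(const char * fname) {
    gguf_init_params params = { /*no_alloc=*/true, /*ctx=*/NULL };
    gguf_context * ctx = gguf_init_from_file(fname, params);

    std::vector<std::string> vocab;
    if (ctx) {
        const int kid = gguf_find_key(ctx, "tokenizer.ggml.tokens");
        if (kid >= 0) {
            const int n = gguf_get_arr_n(ctx, kid);
            for (int i = 0; i < n; i++) {
                vocab.push_back(gguf_get_arr_str(ctx, kid, i));
            }
        }
        gguf_free(ctx);
    }
    return vocab;
}

(The first command instead parses llama2.c's own tokenizer.bin format, which predates GGUF.)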

@ochafik changed the title from "[Draft] llama2.c: direct gguf output (WIP)" to "Update llama2.c converter to read vocab and write models in GGUF format" on Aug 26, 2023
@ochafik marked this pull request as ready for review August 26, 2023 22:11
@ggerganov (Member) left a comment

Nice work - tested it and it works 🦙

@ggerganov merged commit 230d46c into ggml-org:master Aug 27, 2023
@byte-6174 (Contributor) commented

indeed, @ochafik nice quick turnaround!

@ochafik (Collaborator, Author) commented Aug 28, 2023

Thanks guys!!

akawrykow pushed a commit to akawrykow/llama.cpp that referenced this pull request Aug 29, 2023
Update llama2.c converter to read vocab and write models in GGUF format (ggml-org#2751)

* llama2.c: direct gguf output (WIP)

* Simplify vector building logic

* llama2.c gguf conversion: fix token types in converter

* llama2.c: support copying vocab from a llama gguf model file

* llama2.c: update default path for vocab model + readme

* llama2.c: use defines for gguf keys

* llama2.c: escape whitespaces w/ U+2581 in vocab converter the llama.cpp way (see the sketch after this list)

* llama2.c converter: cleanups + take n_ff from config
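
On the U+2581 commit above: llama.cpp follows the SentencePiece convention of storing spaces inside token text as U+2581 ("▁", LOWER ONE EIGHTH BLOCK), so " cat" is stored as "▁cat". A minimal sketch of that escaping (helper name hypothetical):

#include <string>

// Replace each ASCII space with U+2581 (UTF-8 bytes 0xE2 0x96 0x81),
// the SentencePiece-style marker llama.cpp expects in vocab entries.
static std::string escape_whitespace(const std::string & text) {
    std::string out;
    for (const char c : text) {
        if (c == ' ') {
            out += "\xe2\x96\x81";
        } else {
            out += c;
        }
    }
    return out;
}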
