forked from ggerganov/llama.cpp
Issues: LostRuins/koboldcpp
Issues list
#1207 · Enhancement: Speculative decoding – load 2 models at the same time! [enhancement] (opened Nov 9, 2024 by aleksusklim)
#1197 · [Question] Any plans to support models other than GGUF and modalities other than QnA? (opened Nov 2, 2024 by yurivict)
#1186 · String interpolation in antislop sampler [enhancement] (opened Oct 23, 2024 by morbidCode)
#1178 · Clicking Abort during processing of a long prompt can leave the context broken on subsequent generations (opened Oct 21, 2024 by actually-a-cat)
#1158 · KoboldCPP crashes after Arch system update when loading GGUF model: ggml_cuda_host_malloc ... invalid argument (opened Oct 12, 2024 by YajuShinki)
#1156 · Certain characters generated by the model are not present in the stream data for Japanese text (opened Oct 12, 2024 by ceruleandeep)
#1135 · No output when using command line [bug] (opened Sep 23, 2024 by amusingCloud)