* remove c code
* pack llama.cpp
* use request context for llama_cpp
* let llama_cpp decide the number of threads to use
* stop llama runner when app stops
* remove sample count and duration metrics
* use go generate to get libraries
* tmp dir for running llm