ollama / llm / generate / generate_darwin.go (at commit 58d95cc9bd)
4 lines, 53 B, Go

Code shuffle to clean up the llm dir
2024-01-05 01:40:15 +08:00
package generate

Add cgo implementation for llama.cpp: run server.cpp directly inside the Go runtime via cgo while retaining the LLM Go abstractions.
2023-11-14 09:20:34 +08:00
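
The in-process cgo pattern that commit describes can be illustrated with a minimal, self-contained sketch. This is hypothetical code, not ollama's actual binding; the C stub merely stands in for the llama.cpp server entry point.

    // Hypothetical sketch of calling native code in-process via cgo.
    package main

    /*
    #include <stdlib.h>

    // Stand-in for the llama.cpp server entry point.
    static int serve(const char* model) { return model ? 0 : 1; }
    */
    import "C"

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        model := C.CString("model.gguf")
        defer C.free(unsafe.Pointer(model))
        // The C code runs inside the Go process; no child process is involved.
        fmt.Println("serve returned", C.serve(model))
    }
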
Switch back to subprocessing for llama.cpp: this should resolve a number of memory leak and stability defects by isolating llama.cpp in a separate process, shutting it down when idle, and gracefully restarting it if it has problems. It is also a first step toward running multiple copies to support multiple models concurrently.
2024-03-15 01:24:13 +08:00
//go:generate bash ./gen_darwin.sh
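
The //go:generate directive is this file's only content besides the package clause. Because go generate never runs automatically, gen_darwin.sh (presumably the macOS-specific native build steps, given its name) is executed only when the generate step is invoked explicitly before building, for example:

    go generate ./...
    go build .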
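
The subprocessing change described in the last commit message can likewise be sketched. The code and the binary name below are hypothetical, not ollama's actual runner management: the native server runs as a child process so it can be shut down when idle and restarted after a crash.

    // Hypothetical sketch of the subprocess approach: launch the native server
    // as a child process with a bounded lifetime and observe its exit.
    package main

    import (
        "context"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        // Cancelling the context kills the child, e.g. once it has been idle.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // "llama-server" and its flags are placeholders, not ollama's real runner.
        cmd := exec.CommandContext(ctx, "llama-server", "--model", "model.gguf")
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        if err := cmd.Wait(); err != nil {
            // A crash or shutdown surfaces here; a supervisor loop could restart it.
            log.Printf("runner exited: %v", err)
        }
    }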