Repository: squall/ollama (fork of ollama)
File: ollama/llm/llama.cpp/generate_linux.go (4 lines, 45 B, Go) at commit f8ef4439e9
File content:

    package llm

    //go:generate sh ./gen_linux.sh

History (blame):

- 2023-09-12 23:04:35 +08:00: first pass at linux gpu support (#454): linux gpu support; handle multiple gpus; add cuda docker image (#488). Co-authored-by: Michael Yang <mxyng@pm.me>
- 2023-11-14 09:20:34 +08:00: Add cgo implementation for llama.cpp. Run the server.cpp directly inside the Go runtime via cgo while retaining the LLM Go abstractions.
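The file's only directive, `//go:generate sh ./gen_linux.sh`, is not executed at build or run time: the Go toolchain treats it as an ordinary comment, and it only takes effect when `go generate` is run against the package, which then invokes the named script. A minimal sketch of this behavior, using a hypothetical script name (`gen_example.sh` is an assumption, not part of the repo):

```go
package main

import "fmt"

// Hypothetical directive: the compiler ignores this comment entirely.
// Only the `go generate` command scans for lines beginning with
// "//go:generate" and runs the command that follows.
//go:generate sh ./gen_example.sh

// directiveNote returns a reminder of when the directive above fires.
func directiveNote() string {
	return "scanned only by `go generate`"
}

func main() {
	// Building or running this program never executes gen_example.sh.
	fmt.Println(directiveNote())
}
```

In the real repository, running `go generate ./...` from the module root would execute `gen_linux.sh`, which prepares the llama.cpp build artifacts that the `llm` package links against.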