# Ollama

An easy, fast runtime for large language models, powered by `llama.cpp`.

> _Note: this project is a work in progress. Certain models that can be run with `ollama` are intended for research and/or non-commercial use only._

## Install
|
2023-06-23 00:45:31 +08:00
|
|
|
|
2023-07-01 00:39:25 +08:00
|
|
|
Using `pip`:

```
pip install ollama
```

Using `docker`:

```
docker run ollama/ollama
```
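
In practice you will likely want to persist downloaded models and publish the server port. The flags below are standard Docker options, but the data path (`/root/.ollama`) and port (`11434`) are assumptions about the image's defaults, so adjust them if your image differs.

```
# Sketch: keep models in a named volume and expose the API port.
# /root/.ollama and 11434 are assumed defaults, not confirmed here.
docker run -d \
  --name ollama \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  ollama/ollama
```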

## Quickstart

To run a model, use `ollama run`:

```
ollama run orca-mini-3b
|
2023-06-26 01:08:03 +08:00
|
|
|
```

You can also run models from Hugging Face:

```
ollama run huggingface.co/TheBloke/orca_mini_3B-GGML
```

Or run directly from a downloaded model file:

```
ollama run ~/Downloads/orca-mini-13b.ggmlv3.q4_0.bin
```
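
The Docker image above runs `ollama` as a server, which suggests a running instance can also be driven over HTTP rather than through the CLI. The request below is only a sketch of what that might look like: the port, the `/api/generate` path, and the JSON fields are assumptions, not taken from this README, so consult the documentation links below for the actual API.

```
# Sketch only: endpoint and payload shape are assumed, not documented here.
curl http://localhost:11434/api/generate -d '{
  "model": "orca-mini-3b",
  "prompt": "Why is the sky blue?"
}'
```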

## Building

```
go generate ./...
go build .
```
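
This produces an `ollama` binary in the repository root. A minimal sketch of trying it out, assuming the freshly built CLI accepts the same commands as the installed one:

```
# Run a model with the locally built binary instead of an installed one.
./ollama run orca-mini-3b
```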

## Documentation

- [Development](docs/development.md)
- [Python SDK](docs/python.md)