vllm / cacheflow / models (at c45f3c3ab6)

Latest commit: 88c0268a18 "Implement custom kernel for LLaMA rotary embedding" (#14) by Woosuk Kwon, 2023-03-30 11:04:21 -07:00
File                 Last commit                                                Date
__init__.py          Support tensor parallel (#2)                               2023-03-21 13:45:42 -07:00
attention.py         Implement custom kernel for LLaMA rotary embedding (#14)   2023-03-30 11:04:21 -07:00
input_metadata.py    Support tensor parallel (#2)                               2023-03-21 13:45:42 -07:00
llama.py             Implement custom kernel for LLaMA rotary embedding (#14)   2023-03-30 11:04:21 -07:00
memory_analyzer.py   Implement custom kernel for LLaMA rotary embedding (#14)   2023-03-30 11:04:21 -07:00
model_utils.py       Implement LLaMA (#9)                                       2023-03-30 12:25:32 +08:00
opt.py               Implement custom kernel for LLaMA rotary embedding (#14)   2023-03-30 11:04:21 -07:00
sample.py            Implement custom kernel for LLaMA rotary embedding (#14)   2023-03-30 11:04:21 -07:00
utils.py             FastAPI-based working frontend (#10)                       2023-03-29 14:48:56 +08:00
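
The commit that touches most of these files, "Implement custom kernel for LLaMA rotary embedding" (#14), presumably replaces a plain PyTorch implementation of rotary position embedding (RoPE) with a fused GPU kernel. As a point of reference for what that operation computes, here is a minimal PyTorch sketch of RoPE as described in the original RoPE formulation; the function name, the interleaved channel pairing, and the base of 10000 are assumptions for illustration, not the repository's actual kernel or API.

```python
import torch


def apply_rotary_embedding(x: torch.Tensor, positions: torch.Tensor) -> torch.Tensor:
    """Illustrative RoPE reference (hypothetical helper, not vllm's API).

    x:         [num_tokens, num_heads, head_dim], head_dim must be even
    positions: [num_tokens] integer token positions
    """
    head_dim = x.shape[-1]
    # Standard RoPE frequencies: theta_i = 10000^(-2i / head_dim).
    inv_freq = 1.0 / (10000.0 ** (torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim))
    # Per-token, per-frequency rotation angles: [num_tokens, head_dim / 2].
    angles = positions.float()[:, None] * inv_freq[None, :]
    cos = angles.cos()[:, None, :]  # broadcast over the heads dimension
    sin = angles.sin()[:, None, :]
    # Rotate each (even, odd) channel pair by its angle.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out


# Example usage: 4 tokens, 8 heads, head_dim 64.
x = torch.randn(4, 8, 64)
y = apply_rotary_embedding(x, torch.arange(4))
```

A custom kernel wins over this sketch mainly by fusing the cos/sin computation and the pairwise rotation for queries and keys into one pass over memory, instead of materializing the intermediate tensors shown above.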